Terrorist regulation: first assessment and next steps

On Wednesday, April 17th, the European Parliament adopted on first reading the Regulation on online “terrorist content” censorship. By a very small majority, it refused to defend us against political censorship or to protect the free and open European web.

The text still enables a judge or the police to require any platform to delete content within one hour, something no platform can do without using the automated filtering tools developed by Google and Facebook.

Fortunately, the fight is not over: the text can still be modified at second reading by the newly elected Parliament. It is the decision of these new MEPs that will mark the end of the war. And we still have good reasons to hope that, in the end, our freedom will prevail.

To prepare this last battle, let us first take stock of the one that just ended.

Origins of the regulation

The European Commission discreetly published its regulation proposal on September 12, 2018, the very same day that all the attention was on a decisive vote of the European Parliament on the Copyright Directive (read our reaction [FR] to the announcement of the terrorism regulation and our reaction from the same day to the Copyright directive vote).

As will be seen later, the fact that this regulation and this directive were debated in parallel by the European Union made it difficult to fight both of them equally, while revealing our governments’ strong general desire to regulate the Web as a whole in the coming years.

This desire had already been visible as early as 2015. Following a series of deadly attacks in Europe, the European Commission brought together Google, Facebook, Twitter and Microsoft to form the “EU Internet Forum” in order to look for a solution “to protect the public against the diffusion of terrorist content”. The basic idea was already there: to entrust the digital giants with the mission of seeking solutions to our problems.

These giants did not miss the opportunity to offer a solution which, albeit not very useful, strengthens their domination over the rest of the Web and allows European governments to improve their image (especially the French government, whose alliance with Facebook is no longer even hidden). In June and December 2017, the European Commission congratulated the four giants on the solution they had built: a blacklist containing the digital fingerprints of tens of thousands of images and videos categorised as “terrorist” by their moderation services (which mix “artificial intelligence” with thousands of employees exploited [FR] in the poorest countries).

The plan is already explicit: make sure that all Web services use the blacklist provided by the giants to filter the content they distribute. All of this without a judge, without any democratic control, without anything. Here is the grand project: entrust the GAFAM with the mission of “civilising” the Internet.
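To make concrete what such fingerprint-based filtering amounts to, here is a purely illustrative sketch, not the giants’ actual system: a shared blacklist holds fingerprints of content already labelled “terrorist” by the platforms’ moderation services, and every upload is checked against it. We use plain SHA-256 digests for simplicity, whereas the real databases rely on perceptual hashes; all names below are hypothetical.

```python
import hashlib

# Hypothetical shared blacklist: fingerprints (here, exact SHA-256
# digests) of content previously flagged by moderation services.
BLACKLIST = {
    hashlib.sha256(b"previously flagged video bytes").hexdigest(),
}

def is_blocked(upload: bytes) -> bool:
    """Return True if the upload's fingerprint is on the shared blacklist."""
    return hashlib.sha256(upload).hexdigest() in BLACKLIST
```

Under this model, a small platform does not decide anything itself: it simply applies, in advance and automatically, whatever the maintainers of the blacklist have decided to flag.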

This is also how the Copyright Directive operates: it writes into law the “solution” invented by the GAFAM against the barbaric pirates who, in the fantasies of the cultural industry, could harm it. To “civilise” the Internet, the generalisation of YouTube’s Content-ID model will now ensure the massive exploitation of our personal data to finance this industry. Relying on the dominant companies is so much easier than rethinking one’s own cultural policy (as criticised in this opinion piece [FR] written by Félix Tréguer, member of La Quadrature, in Le Monde).

The same inspiration is found in the bill “against online hate” filed by Ms. Avia in France a month ago: just before leaving the State Secretariat for Digital Affairs, Mounir Mahjoubi explained how the law should take inspiration from the moderation put in place by Facebook, presented as a hero of the Web (read our column [FR]).

The Original Text

The regulation on the removal of terrorist content, as originally proposed by the European Commission, entrusts control of the Web to the GAFAM in two ways.

Article 4 of the text requires that platforms remove, within one hour, any content notified by the authorities as terrorist. Since they rarely have technicians working nights and weekends, no small or medium-sized platform can respond to such a demand. These platforms will have no choice but to block in advance any content that could be categorised as suspicious according to the blacklist provided by the Web giants. This is precisely the objective announced in 2015 by the Commission, and none of the Ministers or MEPs we have spoken with have even made an effort to deny it.

Article 6 states that if a platform does not put in place an automatic filter, despite the great pressure imposed by article 4, the authorities can force it to do so. The authorities can even choose the exact tool to be used, which allows them to designate the tools made by Google, Facebook, Twitter, and Microsoft (however, as discussed below, changes were later made during debates on this specific point).

Sadly, the text does not merely delegate Internet censorship to the Web giants. It also gives broad powers to Member States so that they can censor the Internet themselves. A removal demand based on article 4 can come from a judge or from the police, which then acts without any judicial authorisation. Similarly, article 5 provides a notification mechanism by which the police or Europol could force a platform to check whether a post conforms to the platform’s own terms of use, which are themselves required to prohibit the publication of terrorist content (this point was also later modified, as discussed below).

These powers are wide open to misuse by our governments to censor us for political ends (the French police have already done so, using anti-terrorism powers to block the website of far left activists. A year and a half later, this censorship was finally declared [FR] illegal by a court).

A false solution

After many discussions with Ministers in France, during which we criticised this text (read our report), we quickly realised that it was not enough to explain that small and medium-sized platforms would not be able to comply with this regulation, or that it would open the door to political censorship: the Ministers made clear quite quickly that this was the very goal of the text.

So we set out to deconstruct this goal, in three steps.

First, by denouncing the absurdity of handing control of the Web to its giants, especially given that the “attention economy”, the base of their business model, bears so much responsibility for the excessive amount of conflict and anxiety online. Surprisingly, we encountered very little opposition to this argument, other than the vague idea that “the GAFAM are the least-bad solution”.

Second, we tried to take apart the myth of “online self-radicalisation”, which claims that an average person can be seized by murderous ideas after inadvertently watching a few terrorist propaganda videos, and that Internet users must therefore be prevented from accessing such content. A 2017 UNESCO report was a great help here: having assessed 550 studies on the issue, it concludes that there is no evidence to support this myth, and highlights the harm done to freedom of expression in its name. Since the Ministers and MEPs supporting this regulation had no factual arguments left to offer, they told us that their goal was not so much to fight “online self-radicalisation” as to fight already-radicalised people using the Internet to organise and plan attacks.

UNESCO Report: Youth and violent extremism on social media: mapping the research

In the third stage, we had to explain to them that their regulation would be of very little use for this purpose. The people who support murderous ideologies already communicate on Internet platforms that knowingly violate and bypass the law, and which would laugh at official demands to delete content. No law can impose credible digital solutions to fight murderous ideologies. This fight can only be won through cultural and structural changes.

As a last resort, our opponents had nothing left but a sort of pseudo “precautionary principle”, claiming that “maybe this will be useless, but you never know, maybe we’ll find out later that it was useful”… while sacrificing our liberties and our digital environment to find out. It is difficult to believe in the sincerity of such an absurd position, which seems to reveal their real intention: to boast of having adopted a symbolic text, as useless as it is dangerous, just before the European elections.

At the very end, one last argument arose. The uncontrolled spread of the video of the Christchurch attack on YouTube and Facebook over a long period showed that these companies’ automatic moderation tools were of no use. Supporters of the killer easily bypassed the blacklist of digital fingerprints by putting the video online in various formats, getting past automatic detection. It was a bitter irony that such a despicable video proved the utter pointlessness of this regulation.
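The fragility of fingerprint matching is easy to demonstrate. The toy sketch below, illustrative only and not any platform’s actual pipeline, shows that with exact cryptographic fingerprints, even a trivial change to the bytes (a stand-in for re-encoding the video in a new format) produces a completely different digest, so the blacklist entry no longer matches. Real systems use perceptual hashes, which are more robust but were still evaded by sufficiently altered re-uploads.

```python
import hashlib

# Hypothetical byte strings standing in for a flagged video and a
# re-encoded copy of it (new format, new bitrate, etc.).
original = b"\x00\x01fake-video-bytes\x02"
reencoded = original + b"\x00"  # one-byte change is enough

fp_original = hashlib.sha256(original).hexdigest()
fp_reencoded = hashlib.sha256(reencoded).hexdigest()

# The two fingerprints share no relation: an exact-hash blacklist
# built from fp_original will never catch the re-encoded upload.
assert fp_original != fp_reencoded
```

This is why determined uploaders could keep the Christchurch video circulating faster than the filters could be updated.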

General discussions

The discussions were very brief. The French and German governments, which had asked the European Commission to write this text, wanted it approved before the elections. The first-reading debate lasted barely eight months (an ominous record for such a serious text).

First, the European Council (which gathers the Union’s governments) discussed the text. Thanks to the commitment of 61 associations and Web actors, we were able to oppose Emmanuel Macron’s anti-European strategy, a policy that favours the GAFAM and violates the separation of powers by allowing political censorship (read the joint letter). Later, on July 6th 2018, the European Council passed a version very similar to the text proposed by the Commission (read our reaction).

As we were fighting the text, the yellow vests movement gathered momentum in France and was repressed with very impressive means. We could not help noticing that this so-called anti-terrorist policy would be a weapon of choice for censoring such social movements, as European law allows for broad interpretations (read our analysis [FR]).

On to the European Parliament

The debates in the European Parliament got off to a bad start. On December 12, the day after a shooting took place in Strasbourg (while the MEPs were nearby), the MEPs adopted a report “on findings and recommendations of the Special Committee on Terrorism”. This report, which does not have legislative value but rather expresses the Parliament’s intention, served as a sort of prelude to the regulation. The report celebrates the proposal made by the Commission and calls explicitly for the “automatic detection” and “systematic deletion” of terrorist content (see our letter, sent to the MEPs before the adoption of this text, and our reaction to its adoption).

It was not until the end of January that the debates really began in the European Parliament. Daniel Dalton, a far-right MEP (ECR group) from the United Kingdom, ready to give up his mandate with the seemingly imminent Brexit, was appointed rapporteur for the text. As such, his mission was to organise debates in the LIBE Committee (for “civil LIBErties”) in order to propose a “report” (an amended version of the regulation) which would then be put to a vote of the whole Parliament.

At the end of January, Dalton proposed a draft report leaving the European Commission’s initial proposal almost unchanged. This refusal to correct the text was quite different from Dalton’s position on the Copyright Directive, where he opposed automatic filters, in particular to protect smaller platforms. His position seems completely incoherent (see our analysis). Later, we met him on multiple occasions in the Parliament and were able to conclude, to no great surprise, that this incoherence was probably down to the most careerist and trivial of aspirations: Dalton fully understood the causes and consequences of the Regulation. He even recognised that the key “deletion within one hour” measure was as absurd as it was dangerous. Yet he wanted to keep it at all costs for its symbolic value.

In February 2019, debate on the draft report began among the different political groups of the LIBE Committee, in order to find a compromise. We launched our campaign page which, for the remaining two months of discussion, enabled everyone to contact the 61 LIBE members and warn them about the dangers of this text.

The LIBE Committee’s final vote on the report was set for March 21st. Before that, in March, the IMCO (“Internal Market & Consumers”) and CULT (“Culture”) committees both gave their opinions on the text in order to assist the LIBE Committee. These opinions, albeit far from perfect, finally proposed improvements (read our reaction to the opinions of the IMCO and CULT committees). Meanwhile, we met as many members of the LIBE Committee as possible and, after meeting Rachida Dati’s team (in charge of leading negotiations in the LIBE Committee for the right-wing PPE group), we denounced her position as even more dangerous than Dalton’s (read our article [FR]).

Finally accepting the seriousness of the regulation, the LIBE members postponed their vote twice, until April 8th, giving themselves three more weeks to negotiate the report. Three weeks seems a ridiculously short time for such an important text (it is), but the initial schedule had not even allowed two months for its debate. The European Commission and the Member States clearly intervened in the Parliament’s affairs to (successfully) demand that it accept the text at first reading before the end of its mandate, at the end of April, which required the urgent adoption of the LIBE report.

You did not miss it: this whole debate on the terrorism regulation happened while the European Union was also debating the Copyright Directive. After the Parliament agreed on a first position on the Copyright Directive in September 2018, the text was negotiated in inter-institutional negotiations (“trilogue”) between the Council of the Union (which gathers the Member States’ governments), the European Commission and the MEPs mandated by the Parliament, in order to find a general compromise. These negotiations were quite turbulent, with various turnabouts, so that it was very difficult to know when they would end (and especially whether that would be before or after the end of the current mandate).

When we launched our campaign in February calling on MEPs to oppose the terrorism regulation, these negotiations on the Copyright Directive were not yet finished, and we did not know which text the Parliament would vote on first.

The trilogue on Copyright finally ended not long after, at the end of February, and many allied European NGOs launched their full campaign against the Copyright Directive, mainly around the Pledge2019 platform, which enabled everyone to contact MEPs and which La Quadrature du Net was part of from the initial launch.

A few weeks before, as the various campaigns against both texts were getting ready, we and other fellow European NGOs had been considering a single joint campaign against both texts. One can easily understand why: both texts relied on automated filtering and gave the key role to the major actors of the Web. This strategy was all the more appealing as it gave us some flexibility, since we did not know the order in which the texts would be voted on.

At the end of February, the idea of a joint campaign was abandoned, for various reasons: from the hope that a victory against the Copyright Directive would be enough to convince MEPs to correct the worst measures of the terrorism regulation (a strategy we did not find convincing, as the two subjects were too different), to the fear that the campaign against the Copyright Directive would gather less momentum if it was mixed with the inevitably much graver and more fearful arguments to be made against the terrorism regulation.

At La Quadrature du Net, we have sadly grown used to speaking about deaths and massacres over the years (which is far from enjoyable), as they were systematically and shamefully used by our government to justify the illegitimate strengthening of its powers. NGOs from other countries may not be as used to it as we are (and that is good for the mental health of their members). So we kept fighting the terrorism regulation and let most of our allies focus on the Copyright Directive, with as much help as we could provide them (which, there is no way around it, would have been greater had the two texts not been voted on at the same time).

However, this splitting of our forces between the two texts, as regrettable as it was inevitable, was perhaps not so detrimental compared to another scandalous problem: the main European media, leading stakeholders in favour of the Copyright Directive and allied with the cultural industry to this end, largely abandoned their ideal of impartiality. While flooding the public with erroneous and biased news on the directive, lost in their frantic political campaign, they almost completely “forgot” to speak about the terrorism regulation. This is an utterly unprecedented situation for the press on such an important text, which the public discovered only through activist groups and specialised media. The media’s responsibility in this loss of liberties, caused by the Copyright Directive as well as the terrorism regulation, is immense.

In the end, the final vote on the Copyright Directive was set for March 26, just before the LIBE vote on the terrorism regulation, which was delayed until April 8. By a very narrow majority, the Parliament enshrined economic mass surveillance as a funding source for our culture, using the automated filtering tools invented by YouTube with its Content-ID (read our latest call to oppose the Copyright Directive and our reaction [FR] to its adoption).

The fight against the directive will now go on at a national level, with various ongoing legislative proposals in France, which we will discuss very soon.

The adopted regulation

On April 8th, the LIBE Committee adopted its version of the terrorism regulation (read our reaction). The first-reading vote in the plenary assembly of the European Parliament was set less than ten days later, on Wednesday April 17. In this very short time, we barely managed to extend our tool for contacting MEPs (initially designed for the 61 members of the Committee, we extended it to all 751 MEPs) and to send a letter to all MEPs warning them about the dangers of the text (read our letter).

All this was in vain: as we could see for ourselves in Strasbourg, the text was adopted in a few minutes at first reading, during one of the last Parliament sessions before the elections in May.

The adopted version is, with one small difference, exactly the version from the LIBE Committee.

What precisely does it change compared to the European Commission’s proposal from September?

The scope is slightly modified. After strong lobbying by the “cloud” industries, the regulation now applies only to providers that store and provide content “to the public” (no longer “to third parties”), and provides an explicit exception for “cloud infrastructure services” and “cloud service providers”.

The scariest thing is that this new version keeps, in its article 4, the possibility for a platform to be forced to take down, within only one hour, content reported as terrorist by the authorities. Several amendments proposed removing this one-hour deadline, which Daniel Dalton had erected into a symbol. The removal amendment proposed by the ecologists was rejected by only 3 votes (297 in favour, 300 against – see the results of the votes).

Result of the votes for removing the one-hour deletion delay

Nevertheless, two points must be made: first, only the authority of the State in which the service provider has its main establishment can now issue such orders; second, if it is the first order received by the provider, the authority must contact it at least 12 hours before the order. But these details do not change much: be it one hour or thirteen, these deadlines are far too short for most Web actors, who will simply not be able to comply with them.

As for the competent authority entitled to issue such orders, it is still not necessarily a judge. The text now specifies that it can be “a judicial authority or functionally independent administrative authority”. But the notion of a “functionally independent” authority can be interpreted broadly by the Member States and in no way guarantees the designation of an entity legally distinct from the government (it is thus possible for the OCLCTIC, the French national police office already empowered in France to order the blocking of websites advertising terrorism, to be considered “functionally independent”).

Article 5, which enabled authorities to refer content directly to a platform, is completely removed. While this is a victory, it is mostly symbolic because, in practice, the States and Europol will still refer content, which the new version of the regulation neither bans nor limits.

Article 6 is deeply modified: it no longer mentions “proactive” measures but “specific measures”, and service providers now merely have the “possibility” of putting them in place, where it was previously an obligation of principle. Above all, the text specifies that the competent authority cannot impose a “general surveillance obligation, nor automated tools”. This is probably the most important victory gained on this text, even if, in practice, platforms will have no choice but to use automated filtering tools upstream to avoid unrealistic one-hour deletion orders downstream.

Finally, article 9 states that platforms must systematically put in place human oversight and verification “of the appropriateness of the decision to remove or deny access to content”. Such a provision, written so broadly, could turn out to be very useful in limiting online platforms’ use of automated filtering, provided it is interpreted as requiring human intervention for a priori filtering (when content is uploaded) and not only for a posteriori moderation – since the main risk of this text is that it imposes a priori filters.

In the end, despite some additions and modifications softening a few problematic points, the adoption of this text by the European Parliament remains a crushing defeat. Overall, the text keeps promoting the dangerous idea that censoring online content can be a solution to the spread of deadly ideologies.

Above all, by keeping the possibility for an administrative authority to order any Web actor to remove content within one hour (under threat of heavy sanctions), the European institutions will force these actors to adopt a broad interpretation of the notion of “terrorist content” and to use automated filtering tools upstream.

Thus, forbidding the authorities from imposing the use of these tools does not change much. Only the Web giants will be able to meet these obligations, hence the fear of an even more centralised Web in the hands of a few companies.

Finally, the improvements brought to the text by the LIBE Committee (deletion of article 5, softening of article 6, the one-hour deadline extended in some cases, the limits on automated filtering) will most certainly be strongly attacked in trilogue by the European Commission and the most committed Member States (including France). We can therefore in no way claim victory on the basis of this limited progress.

What now?

The adopted text will be the European Parliament’s position for the trilogue negotiations coming in the new mandate with the European Commission and the Council. These will take place before another final vote by all the new MEPs next year.

Consequently, the fight is far from lost, especially considering the number of votes that were needed to reject the one-hour deadline (3!) and the improvements (even symbolic ones) we were able to push through despite the almost impossible deadlines and conditions under which the debate took place. The text may still be modified at second reading, but this will require another mobilisation in a few months, during the trilogues and the Parliament debates. If we want the text rejected, or at least stripped of most of its content, this mobilisation will need the involvement of all European NGOs and a much greater commitment from the media on this topic, in circumstances we must hope will be in our favour.

This is the only way we can hope to push back mass and automated censorship which is at the heart of this text.

Before we move on to the next fight, let’s take the time to thank all those who took part in this campaign and helped us win these first victories: all the NGOs (French and European) who campaigned, at our side or on their own, to defeat this text and its worst measures (including the 61 organisations which signed the open letter to Emmanuel Macron), all the people who shared and helped share our articles and communications, all those who contacted MEPs (including those who came to do so with us, in our offices) and those who helped translate our campaign (into English and Spanish). Thanks! <3

See you in a few months to continue this fight!