Authoritarian Censorship

Why the forthcoming European Regulation must be rejected

In December 2015, following several terrorist attacks in Europe, the European Commission announced the creation of the “EU Internet Forum”. It brought together “EU Interior Ministers, high-level representatives of major internet companies”, mainly “Facebook, Google, Microsoft and Twitter”, as well as “Europol, the EU Counter Terrorism Co-ordinator and the European Parliament”. The purpose of this new Forum was to find solutions “to protect the public from the spread of terrorist material”.

Later, in spring 2017, the European Commission explained that it had been “working over the last two years with key internet platforms including under the EU Internet Forum to ensure the voluntary removal of online terrorist content”. It underlined that “the internet industry-led initiative to create a ‘database of hashes’ ensures that once terrorist material is taken down on one platform, it is not uploaded on another platform”. From then on, “the aim is that internet platforms do more, notably to step up the automated detection of terrorist content, to share related technology and tools with smaller companies, and to make full use of the ‘database of hashes’”.

How? In December 2017, the Commission explained: “The database of known terrorist content (‘the database of hashes’) (…) is now fully operational and has so far gathered over 40,000 hashes of known terrorist videos and images”. “Our most pressing goal right now is to scale up our efforts so that all internet companies take part – making sure that terrorist content on the internet, no matter where, no longer stands a chance”. “This is why the Forum has prioritised outreach to and engagement with new and small companies”.
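Neither the Commission nor the companies have published the internals of this “database of hashes”, but the general mechanism is simple to sketch. Below is a minimal, purely illustrative Python sketch under our own assumptions (all names are ours, and real systems are reported to use perceptual rather than cryptographic hashes): each platform fingerprints uploads and rejects anything whose fingerprint is already on the shared blacklist.

```python
import hashlib

# Purely illustrative stand-in for the Forum's shared blacklist; the real
# database reportedly holds tens of thousands of entries.
BLACKLISTED_HASHES: set[str] = set()

def fingerprint(data: bytes) -> str:
    """Fingerprint a file with a cryptographic hash (SHA-256).

    This only matches byte-identical copies: any re-encoding defeats it,
    which is why deployed systems are believed to rely on fuzzier
    perceptual hashing instead.
    """
    return hashlib.sha256(data).hexdigest()

def on_takedown(removed_file: bytes) -> None:
    """When one platform removes a file, share its hash with the others."""
    BLACKLISTED_HASHES.add(fingerprint(removed_file))

def should_block(upload: bytes) -> bool:
    """Reject any upload whose fingerprint is already blacklisted."""
    return fingerprint(upload) in BLACKLISTED_HASHES

on_takedown(b"known propaganda video")
assert should_block(b"known propaganda video")      # identical copy is blocked
assert not should_block(b"re-encoded propaganda")   # a re-encoded copy slips through
```

This is the whole design: a takedown on one platform becomes a silent, automatic ban on all the others, with no judge involved at any step.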

In the same statement, Facebook explained: “Today, 99% of the ISIS and Al Qaeda-related terror content we remove from Facebook is content we detect before anyone in our community has flagged it to us, and in some cases, before it goes live on the site. We do this primarily through the use of automated systems like photo and video matching and text-based machine learning. Once we are aware of a piece of terror content, we remove 83% of subsequently uploaded copies within one hour of upload.”

At this point, the situation was clear: Google and Facebook had been working with the European Commission for two years to develop tools supposedly able to detect terrorist content within one hour and to fill a huge blacklist. The next step was to make “small companies” use these tools.

A new Regulation

On 12 September 2018, in order to fulfill this goal and pushed by the French and German governments, the European Commission published a proposal for a Regulation “on preventing the dissemination of terrorist content online”.

It requires all Internet actors (websites, blog and video hosts, forums and social media, email and messaging providers), big or small, European or not, to “take proactive measures to protect their services against the dissemination of terrorist content” (Article 6).

If an actor fails to “protect” its service (typically because it has not used the tools developed within the EU Internet Forum), any European national authority may order it to remove content that this authority regards as “terrorist”. This national authority may be the police, acting without judicial authorisation.

Once a removal order has been sent, the actor must remove the content within one hour (Article 4). Why one hour? Probably because Facebook’s tools would presumably have detected and removed the content within one hour, even before the police ordered it to.

Furthermore, in order to answer the police’s requests, every actor must have a point of contact that can be reached 24/7 (Article 14 and Recital 33).

Finally, if an actor is not efficient enough, the national authority may force it to implement specific measures, including monitoring all content in order to actively search for material related to terrorism (Article 6 and Recitals 16 and 19). These specific measures may well be the very same tools developed by Facebook and Google within the EU Internet Forum.

Each Member State will decide on the penalties applicable to breaches of these obligations. In case of a “systematic” failure to comply, the penalty can reach 4% of the provider’s global turnover.

Automated censorship

Hosting service providers will have to automatically filter the content they receive, either as a “proactive measure” or to avoid removal orders with their unrealistic deadlines, preventively filtering anything that closely resembles terrorist content. This will lead to the over-blocking of licit content that is useful to public debate, something we are already seeing.
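To see why this over-blocking is structural rather than accidental, consider how fuzzy matching works. The sketch below rests entirely on our own assumptions (a 64-bit perceptual hash and an arbitrary distance threshold; no deployed filter publishes its parameters), but the trade-off it illustrates is unavoidable.

```python
# Perceptual hashes are compared by Hamming distance: below some threshold,
# two files are declared "the same". The threshold here is a guess of ours,
# not the setting of any real filter.
MATCH_THRESHOLD = 10  # bits; looser over-blocks, stricter misses copies

def hamming_distance(a: int, b: int) -> int:
    """Count the bits that differ between two 64-bit perceptual hashes."""
    return bin(a ^ b).count("1")

def is_blocked(upload_hash: int, blacklist: set[int]) -> bool:
    """Block any upload 'close enough' to a blacklisted hash.

    The filter cannot tell propaganda from journalism, satire or research
    quoting the same footage: a news report embedding a few seconds of a
    blacklisted video can fall within the threshold and be blocked too.
    """
    return any(hamming_distance(upload_hash, h) <= MATCH_THRESHOLD
               for h in blacklist)
```

Whatever threshold is chosen, the filter either lets re-encoded copies through or blocks the journalists, researchers and witnesses who quote the same material.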

Automated filtering is not an acceptable solution: human behaviour must only be assessed by humans. Nor is it a realistic one: so-called “automated filtering” in fact rests on outsourcing content moderation to swarms of low-wage employees working in stressful environments, in order to compensate for machines that are inevitably flawed.

The end of the decentralised Web

From a technical, economic and human perspective, only a handful of providers will be able to comply with these rigorous obligations – mostly the Web giants.

To escape heavy sanctions, the other actors (economic or not) will have no choice but to shut down their hosting services.

The rich, broad and decentralised Web will disappear. The dominance of the giants will be entrenched.

Delegation of State powers

Private censorship will be reinforced, weakening the role of the judge, who alone should determine which content to censor. Delegating the monitoring of our communications to private actors is new and has, until now, been forbidden by European law (Article 15 of Directive 2000/31).

Our governments are giving in to the temptation of delegating their police powers to a few giants, making them almighty, destroying a huge part of the European economy, and encouraging businesses that profit from our personal data.

This delegation also blinds the State to illicit activities it should be aware of: terrorist content that private actors block by default will never come to the authorities’ attention, and the activity behind it can no longer be monitored.

A useless censorship

The European Commission’s Impact Assessment, which tries to justify the Regulation, does not explain anywhere in its 146 pages what exact effect the dissemination of terrorist content has on alleged radicalisation. Neither do our governments. This fantasised fear is nonetheless the main justification for this Regulation.

The role of the Internet in terrorist radicalisation is now questioned by expert reports. The terrorists who recently took action were not radicalised on the Internet. For the International Centre for the Study of Radicalisation and Political Violence, the role attributed to the Internet is unrealistic and greatly exaggerated.