D. Dalton, Rapporteur on Terrorism Regulation, about to enable Mass Censorship


Daniel Dalton is a Member of the European Parliament (Conservative Party, part of the far-right ECR political group) and, as the text’s rapporteur, he is in charge of leading the debate on the Anti-Terrorism Censorship Regulation (read our article on this Regulation). Being from the UK, he will be leaving the European Parliament in a few months but, before then, he intends to let the European Union outsource Internet censorship to Google and Facebook, destroy small and medium Internet actors, and let the police order the removal of illicit content within one hour, without judicial authorisation.

Last Wednesday, he published his “draft report” – the baseline from which the European Parliament may amend the authoritarian Regulation proposed by the European Commission last September and pushed forward by France and Germany. This draft report suggests few changes to the initial Regulation. Perhaps Mr Dalton simply does not care about this Regulation, as he is about to leave us to our fate anyway.

Or does he?

During the past months, Dalton has been rather vocal against the automated filtering obligations which will result from article 13 of the Copyright Directive.

He clearly stated that “The big players such as Google and Amazon can cope with many of the obligations imposed on them by the JURI text on Article 13 in terms of seeking out millions of licensees from around the world and concluding agreements with them and putting in place sophisticated monitoring software, small one-person part-time businesses may not be able to. Much legitimate content would be automatically prevented from being uploaded by users by the content monitoring software that websites would be required to install. Many users would inevitably not appeal such decisions, an unacceptable stifling of freedom of speech and something which would fundamentally change the current nature of the internet”. So why would he do the opposite right now?

Here are two hypotheses.

First option: he was never honest to begin with and simply works for Google. Google is against article 13? So is Dalton. Google will significantly benefit from this anti-terrorism Regulation, as it will destroy potential competitors and place the company at the centre of the whole Web’s moderation system? So Dalton defends the text. It’s a highly cynical hypothesis and we’d rather not believe it.

Second option: Dalton is honest but misinformed. He has not read the public letters signed by dozens of Internet actors, nor the analysis we published alongside so many other NGOs (EDRi or Mozilla, for instance). This seems credible. The only change Dalton suggests in his draft report is that the Anti-Terrorism Regulation should not lead to automatic filtering. This is consistent with his opposition to the Copyright Directive. But he has not yet realised that, despite his suggestion, the core of the Regulation can only lead to some form of automated censorship.

If Internet actors are forced to have a point of contact available 24/7 to answer police orders to remove content within one hour (an element which still exists in the draft report), what does Dalton think will happen? Very few actors are capable of removing content on such short notice. And, in any case, nobody will calmly wait around the clock for the police to call. Most actors will simply outsource this role to other companies. Which ones, you might ask? One need only read the communications published jointly by the European Commission, Google and Facebook to find out: these companies claim to be able to automatically detect and remove alleged terrorist content within one hour, and the Commission explains that smaller actors should now use their tools.1

It’s quite simple: the very purpose of this Regulation is to force all Internet actors to use the tools developed by Google and Facebook over the last two years within the European Internet Forum. The European Commission has never tried to hide this fact. If Dalton is honestly against automated filtering and not working on behalf of Google, he should simply withdraw his draft report and ask for the full rejection of the Regulation – which cannot be amended or saved in any way.

A first vote on the draft report may be scheduled for 21 March. La Quadrature du Net will be visiting MEPs in Brussels this week and we will update you then on how things look from the ground.

“Mr Dalton’s statements don’t appear to match his actions. If he truly opposes automated filtering, as he rightly should, then why does he oppose it only when it stems from the Copyright Directive but not from the Anti-Terrorism Censorship Regulation? If you are as intrigued by Mr Dalton’s inconsistent behaviour as we are, don’t hesitate to ask him directly,” suggests Martin Drago, legal analyst for La Quadrature du Net.

References
1 In 2017, the European Commission proudly announced it had been “working over the last two years with key internet platforms including under the EU Internet Forum” (mainly Google, Facebook, Twitter and Microsoft since 2015) “to ensure the voluntary removal of online terrorist content”, notably thanks to “the internet industry-led initiative to create a ‘database of hashes’ [which] ensures that once terrorist material is taken down on one platform, it is not uploaded on another platform”. Already, “the aim is that internet platforms do more, notably to step up the automated detection of terrorist content, to share related technology and tools with smaller companies, and to make full use of the ‘database of hashes’”.