Copyright Directive: Let’s Fight Automated Filtering… and Web Centralisation!

12 June 2018 – On 20 June, the European Parliament will make its decision regarding the Copyright Directive, a symbol of a new era of Internet regulation. La Quadrature is calling on you to phone Members of the European Parliament and demand that they act against automated censorship in the name of copyright protection and, more broadly, against the centralisation of the Web.

To understand the complex vote that will take place on 20 June, we first need to revisit the basics of how content distributed over the Internet is regulated.

An Uncertain Equilibrium

The eCommerce Directive, adopted by the European Union in 2000, laid down the foundations for content regulation on the Internet. It established the following principle: if you host and distribute text, images or videos provided by third parties and this content is illegal (because of copyright infringement, sexist or racist content, glorification of terrorism, etc.), you are not liable for it. This exemption from liability, however, requires two things: that you did not play an active role in the distribution of the content (organising it to promote some of it, for example) and that, if content is reported to you as “illicit”, you remove it “promptly”. If you fail to meet these conditions, you can be treated as the publisher of the content.

This equilibrium, a particularly uncertain one – assessing what is “illicit” and what counts as “promptly” has proven difficult – has applied indiscriminately to all hosters for twenty years: those who rent us server space to host and distribute our websites (the French company OVH is a good example); the forums and wikis where we share our experiences and knowledge; non-centralised networks such as Mastodon or PeerTube; and, of course, the giants of the Web – Facebook, YouTube and the others who seem to have locked the majority of our public exchanges in their hands.

This equilibrium progressively fell apart as these giants gave up on the idea of remaining neutral: now openly assuming their active role, they prioritise all the content they distribute according to economic criteria (highlighting the advertisements of those who pay them, as well as the content that will keep us on their platforms longer) or political criteria (Facebook, for example, has a strict censorship policy towards nudity).

In theory, as we have seen, this generalised filtering of the public debate – which often rests on automatically applied economic and political criteria – should make them liable for all the content they distribute. In practice, however, one can understand why this principle is not really applied: it would encourage these giants to censor any potentially illegal speech, drastically limiting the capacity for self-expression of the millions of people who still use, more or less willingly, these platforms to take part in the public debate. The law thus appears imperfect here: too strict in theory to be put into practice.

Automatic Filtering

In September 2016, to address part of this problem, the European Commission proposed a “Copyright Directive”.

Article 13 of this text aims to create new rules for big content providers, those who distribute “large amounts of works”. These providers would have to either enter into agreements with the copyright holders of the works they distribute, defining how the revenue (from advertising or subscriptions) is shared with them, or take measures to prevent the distribution of content flagged by copyright holders.
The text mentions “effective content recognition technologies”, clearly referring to Content ID, which has been deployed on YouTube for ten years – a tool that enables Google to automatically identify the works published on its site so that rightsholders can prevent or allow their distribution (and, if so, in return for remuneration).

The Commission’s proposals face the same criticism as that levelled at YouTube for years: they delegate the subtleties of regulation to automated tools, presented as miracle solutions. Blind to the nuances of human behaviour, these tools censor anything and everything on the basis of technical bugs, miscalibrated criteria and absurd rationales, while preventing the legitimate exercise of copyright exceptions (the right to quote, parody, …).
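To make the problem concrete, here is a deliberately naive sketch in Python of threshold-based fingerprint matching – the names, data and threshold are invented for illustration, and Content ID itself is proprietary and vastly more sophisticated. The point is simply that such a filter only measures how much of a reference work appears in an upload: a short excerpt used in a review or a parody can trip it just as surely as outright piracy, and no copyright exception is ever taken into account.

```python
from dataclasses import dataclass

@dataclass
class Upload:
    title: str
    segments: set  # hashed audio/video segments extracted from the upload

# Hypothetical reference database supplied by rightsholders:
# work title -> set of segment hashes for that work.
REFERENCE_WORKS = {
    "Hit Song (Label X)": {f"seg{i:02d}" for i in range(1, 11)},  # 10 segments
}

MATCH_THRESHOLD = 0.2  # block as soon as 20% of a reference work is detected

def is_blocked(upload):
    """Return True if the upload shares too many segments with any reference work."""
    for title, reference in REFERENCE_WORKS.items():
        overlap = len(upload.segments & reference) / len(reference)
        if overlap >= MATCH_THRESHOLD:
            print(f"Blocked '{upload.title}': {overlap:.0%} overlap with '{title}'")
            return True
    return False

# A music review quoting a brief excerpt matches only 2 of the 10 segments,
# yet still crosses the threshold: the filter cannot tell quotation from piracy.
review = Upload("Music review quoting a short excerpt", {"seg01", "seg02"})
is_blocked(review)
```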

To grasp the importance of this debate, one should understand that automated filtering is already widely deployed and praised by Facebook and Google, well beyond copyright, to fight against any type of illegal content – or so they claim. Many governments are also tempted to follow this path.

The Copyright Directive must not legitimise and generalise this technological solutionism, which automates our social relationships and treats humans as mere machines controlled by a few private companies. Instead, the current debate should be an opportunity to limit how the giants of the Web use it and to deny them the control they hold over our world.

A New Distinction

On 25 May, the European Union’s Member States declared their position on the Copyright Directive. This position includes a crucial amendment which clearly creates a new category of actors: anyone who, while hosting and distributing a large number of works, organises and promotes them for profit – that is, anyone who plays an “active role” in their presentation.

This new category is designed to fall outside the protection offered by the “eCommerce” Directive of 2000, without being subjected to a regime of systematic liability either. It is thus an intermediate category, between “all” and “nothing”, which could potentially solve many of the problems that have arisen over the past twenty years or so. Let us call this new category of actors “platforms”. While the term is both generic and vague, practice and official discourse seem to be coalescing around it.

The position of the Member States is that these platforms should be liable for the works they distribute without the authorisation of rightsholders, if they have not set up a system which – in proportion to their capacities – could have prevented that distribution. The idea of automated filtering such as “Content ID” is not set aside, but its use is suggested less explicitly.

For its part, the European Parliament will declare its position on 20 June, through its Committee on Legal Affairs (JURI). After long debates, the rapporteur of the directive, Axel Voss, published his proposals, adding a new precision: the censorship operated by the platforms should neither lead to the filtering of content that does not infringe copyright nor to generalised monitoring of uploaded content. The distinction from generalised automated filtering is slowly becoming clearer. However, it is crucial that it appear explicitly in the final text, along with a clear threshold (of works distributed, registered users, etc.), in order to be certain that the mechanism will specifically address the issue of centralisation.

Finally, Axel Voss’s proposals specify some guarantees against abusive or arbitrary censorship: a mechanism to quickly challenge a platform’s decision, as well as the possibility of bringing the matter before a judge in order to enforce the copyright exceptions that would make the filtering unjustified. One should, however, go a lot further: asking Internet users to seek a ruling from a judge to enforce their rights is too burdensome, given the imbalance between the parties involved. It would be better to reverse the rules when a removal is challenged: censored content should come back online when a user considers themselves to be within their rights, and it would be up to rightsholders to take the matter to court to obtain a final “stay down” order, under the control of a judge.

A Disappointing Compromise

Rather than legitimise the automated regulation model on which the power of the giants of the Web is based, these proposals could begin to oversee it and limit its effects. But let’s not rejoice too quickly: automated regulation should be dismantled and banned rather than loosely overseen, and that is not on the cards for now. Furthermore, the European Parliament’s decision is still in the making, and could fall back onto the technological solutionism that has been at the heart of so many recent decisions.

La Quadrature du Net calls on you to phone MEPs, up to and including 20 June, to demand:
– that new obligations regarding copyright only affect hosters who organise content with a for-profit goal and who reach a certain clearly determined threshold;
– that these new obligations never turn into automated filtering, which must be clearly forbidden;
– that the responsibility for taking a matter to court to enforce one’s rights in the case of a removal request fall on rightsholders and not on Internet users.

If the text settles on this compromise, the worst might be avoided, but this Copyright Directive will still be a failure: once again, the debate has focused on repressive and retrograde measures when it was initially meant to open a reflection on the balance copyright needs in the digital age. This ambition was abandoned with the European Parliament’s rejection of the proposals, which were already far more modest than La Quadrature du Net’s own proposals for copyright reform.

A New Balance

As we have said, this debate goes far beyond copyright. It concerns the overall regulation of hosters – in the fight against “fake news”, against the spread of hate, against terrorist propaganda, etc. – which is being discussed more and more. It concerns the way each and every one of us can take part in the public debate, as much to express ourselves as to access information.

All these challenges have a common enemy: the centralisation of the Web, which has locked the vast majority of Internet users into one-way, rigid rules that care little for the quality, tranquillity or relevance of our exchanges, and which exist only to serve a few companies’ pursuit of profit.
One of the main causes of this centralisation is the legal difficulty that has long made the existence of its cure – non-centralised hosters – precarious. Since they do not finance themselves through mass surveillance and regulation, they cannot take the risk of expensive lawsuits for failing to “promptly” remove every piece of “unlawful” content they are informed of. Most of the time, such hosters can barely take the risk of existing at all.

The necessary condition for the development of such services is for the law, at last, to stop imposing on them rules which for twenty years have been designed for no one but a few giants. Creating a new category for them offers the hope of freeing the non-centralised Web from the absurd legal framework into which judges and lawmakers have slowly trapped it.