AI Regulation: The EU should not give in to the surveillance industry lobbies

Although it claims to protect our liberties, the EU's draft text on artificial intelligence (AI), presented by Margrethe Vestager, actually promotes the accelerated development of every aspect of AI, in particular for security purposes. Riddled with exceptions, resting on a stale risk-based approach, and picking up the French government's rhetoric on the need for more experimentation, this text should be rewritten from the ground up. In its current state, it risks undermining the slim legal protections that European law offers against the massive deployment of surveillance techniques in public space.

On April 21, 2021, the European Commission (EC) published a proposal for a regulation setting out a "European approach" to AI, accompanied by a coordinated plan to guide member states' action in the years to come.

Beyond the European Commission's rhetoric, the draft regulation is deeply insufficient in how it treats the danger that AI systems pose to fundamental freedoms. Behind the EC's heroic narrative lurks a sneaky attack on European data protection law, one that challenges the principles established by the GDPR and the Law Enforcement Directive.

Accelerating the deployment of AI by law enforcement authorities

Far from suspending all the AI systems that plainly violate European law (such as facial recognition systems, which we discuss here, in French), article 5 of this proposal merely prohibits four specific "uses" while granting national authorities broad exemptions.

This sham prohibition covers, in a nutshell, "subliminal" AI techniques; AI that exploits people's vulnerability to "substantially alter their behaviour"; AI for "social credit"; and biometric identification through AI. In reality, these prohibitions are so narrow (why only these four areas and no others?) and so poorly defined as almost to give the impression that the EC's aim is to authorise the widest possible measures rather than really to prohibit any (on this subject, see in particular EDRi's complete analysis of the draft regulation).

The example of biometric identification is particularly revealing (recital 23 of the text tells us, by the way, that this part is a "lex specialis", i.e., that it supersedes existing law on the matter of biometric data). Biometric identification systems are prohibited for "real-time" use for law-enforcement purposes except, notably, to find "potential specific victims of crime", to prevent "specific, substantial and imminent threats to the life of natural persons", or to prevent a terrorist attack… One understands that with such broad exceptions this "prohibition" is actually an authorisation, and not at all a prohibition of facial recognition.

Echoing the rhetoric of the security industry

This draft, among other things, writes into the regulation a distinction long sought by the security industry's lobbies: the distinction between biometric surveillance in "real time" and its ex post use. But what difference is there between mass facial recognition applied immediately and the same applied a few hours later?

Whatever the case, for the regulation's drafters, "real-time" surveillance is set out as prohibited whilst ex post surveillance is authorised in principle (article 6). This distinction serves above all to reassure certain European police forces (France's in particular) that already make massive use of facial recognition (see our article about the French police's use of facial recognition here).

The reuse of the security industry's arguments goes further, as the exceptions for real-time biometric surveillance show. The use of facial recognition to search for "specific potential victims of crime", such as "missing children", as well as its use to prevent terrorist attacks, are precisely what pro-security politicians and industry have been demanding for years.

Authorisation in principle

While earlier versions of this proposed regulation aimed to ban AI systems enabling mass surveillance of individuals (people were speaking of a "moratorium" in 2020), the text finally endorsed by the College of Commissioners largely sidesteps the issue of indiscriminate surveillance, suggesting that the EU executive has once again bowed to the security agenda of European governments.

This admission of failure likewise appears in the European Commission's choice of technologies that it deems unworthy of prohibition and simply considers "high-risk". These include, for example, technologies for lie detection, emotion analysis, predictive policing, border surveillance… Moreover, the details of these "high-risk" technologies, as well as some of the obligations they are supposed to meet, are set out not in the body of the text but in annexes that the European Commission gives itself permission to modify unilaterally.

These technologies are therefore not forbidden but actually authorised in principle, subject merely to supplementary requirements (articles 6 et seq.1).

Impact assessments rather than prohibitions

The safeguards proposed to regulate these technologies are generally inadequate to guarantee effective control of their uses, simply because most of these systems are subject to nothing more than self-certification. While this risk-based approach may reassure the private sector, it entirely fails to guarantee that AI suppliers will respect and protect individuals' human rights (cf. the analysis of Access Now).

The European Commission wants to establish an "impact assessment law": any technology may be deployed as long as the party responsible has itself carried out a "pre-evaluation" of the measure's conformity with European law. But these impact assessments will not limit the deployment of Technopolice. Manufacturers and interest groups are used to performing such analyses, which suit their purposes quite well. This is what happened with facial recognition in Nice, where the Mayor's office sent its analysis to the CNIL only a few days before deployment.

The Commission has therefore made a qualitative leap in its pursuit of "better regulation": anticipating and satisfying, before negotiations have even opened, the lobbying campaigns that tech giants will wage in the years to come.

Less recourse for citizens, more dehumanisation

It is equally troubling that nothing in the bill offers citizens any recourse or redress against the deployment of these systems; the bill focuses mainly on the relationships between suppliers and their clients. The few transparency requirements never take civil society into account and address only (unspecified) "national authorities". Once again, this is already close to the reality in France: a large proportion of the experiments documented by Technopolice (cf. the roadmap) have been communicated to the CNIL, which has nonetheless made none of this information public. It once again falls to civil society to demand that these exchanges be made public so they can be subjected to criticism. We have no hope of progress on this front.

Even so, taking civil society into account should lie at the heart of the European approach to AI, as dozens of human rights organisations have pointed out.

Experiment in order to normalise

Further proof that this bill does more to authorise industrial AI than to restrain it can be found in article 53, which pushes governments to develop "regulatory sandboxes for AI". The underlying idea: create an environment "which facilitates the development, testing and validation of AI systems"; in other words, relieve industry – notably the security sector – of the serious constraints our rights and freedoms impose, so as to make experimentation easier.

One need only read the more than enthusiastic reaction of Jean-Baptiste Siproudhis, a former representative of Thalès and Atos, to this proposal (https://www.lemonde.fr/idees/article/2021/08/03/le-projet-de-reglement-europeen-sur-l-intelligence-artificielle-encourage-l-ethique-des-affaires_6090372_3232.html) to suspect that something isn't right. Hearing him speak of businesses that "will become tomorrow a main source of direct inspiration for new norms", making the law "a cycle of progress", can only worry us about legislators' submission to the whims of industry.

Moreover, the situation could become even worse: several member states now want a separate law for law enforcement with, we imagine, even looser limits and even wider exceptions.

Far from opening up "the path to ethical technology in the entire world", in the words of the Commission's Vice-President Margrethe Vestager, this plan consolidates a political agenda dictated by the security industry, in which the introduction of AI is presented as necessary and inevitable for entire sectors of society. It rests on a naïve and fantasised vision of these technologies and of the companies that supply them.

References
1. In particular, see article 8: "High-risk AI systems shall comply with the requirements established in this chapter."