EU AI Act will fail commitment to ban biometric mass surveillance

Press release of the Reclaim Your Face coalition.

On 8 December 2023, EU lawmakers celebrated reaching a deal on the long-awaited Artificial Intelligence (AI) Act. Lead Parliamentarians reassured their colleagues that they had preserved strong protections for human rights, including ruling out biometric mass surveillance (BMS).

Yet despite the lawmakers’ bravado, the AI Act will not ban the vast majority of dangerous BMS practices. Instead, it will introduce – for the first time in the EU – conditions on how to use these systems. Members of the European Parliament (MEPs) and EU Member State ministers will vote on whether they accept the final deal in spring 2024.

The EU is making history – for the wrong reasons

The Reclaim Your Face coalition has long argued that BMS practices are error-prone and risky by design, and have no place in a democratic society. Police and public authorities already have so much information about each of us at their fingertips; they do not need to be able to identify and profile us all of the time, objectifying our faces and bodies at the push of a button.

Yet despite a strong negotiating position from the European Parliament, which called for a ban on most BMS practices, very little survived the AI Act negotiations. Under pressure from law enforcement representatives, the Parliament was cornered into accepting only weak limitations on intrusive BMS practices.

One of the few biometrics safeguards which had apparently survived the negotiations – a restriction on the use of retrospective facial recognition – has since been gutted in subsequent so-called ‘technical’ discussions.

Despite promises from the Spanish representatives in charge of the negotiations that nothing substantive would change after 8 December, this watering-down of protections against retrospective facial recognition is a further letdown in our fight against a BMS society.

What’s in the deal?

Based on what we have seen of the final text, the AI Act is set to be a missed opportunity to protect civil liberties. Our rights to attend a protest, to access reproductive healthcare, or even to sit on a bench could still be jeopardised by pervasive biometric surveillance. Restrictions on the use of live and retrospective facial recognition in the AI Act will be minimal, and will not apply to private companies or administrative authorities.

We are also disappointed that when it comes to so-called ‘emotion recognition’ and biometric categorisation practices, only very limited use cases are banned in the final text, with huge loopholes.

This means that the AI Act will permit many forms of emotion recognition – such as police using AI systems to predict who is or is not telling the truth – despite these systems lacking any credible scientific basis. If adopted in this form, the AI Act will legitimise a practice that throughout history has been linked to eugenics.

Police categorising people in CCTV feeds on the basis of their skin colour is also set to be allowed. It’s hard to see how this could be permissible given that EU law prohibits discrimination – but apparently, when done by a machine, the legislators consider this to be acceptable.

Yet at least one thing had stood out for positive reasons after the final negotiation: the deal would limit post (retrospective) public facial recognition to the pursuit of serious cross-border crimes. Whilst the Reclaim Your Face campaign had called for even stronger rules on this, it would nevertheless have been a significant improvement from the current situation, where EU member states use retrospective facial recognition with abandon.

This would have been a win for the Parliament amid so much ground given up on biometrics. Yet in the time since the final negotiation, pressure from member state governments has seen the Parliament forced to agree to delete the limitation to serious cross-border crimes and to weaken the remaining safeguards [paywalled]. Now, just a vague link to the “threat” of a crime could be enough to justify the use of retrospective facial recognition in public spaces.

Reportedly, the country leading the charge to steamroller our right to be protected from abuses of our biometric data is France. Ahead of the Paris Olympics and Paralympics later this year, France has fought to preserve or expand the powers of the state to eradicate our anonymity in public spaces, and to use opaque and unreliable AI systems to claim to know what we are thinking. The other Member State governments, and the Parliament’s lead negotiators, have failed to stop them.

Under this new law, we will be guilty by algorithm until proven innocent – and the EU will have rubber-stamped biometric mass surveillance. This will give carte blanche to EU countries to roll out more surveillance of our faces and bodies, which in turn will set a chilling global precedent.