Police racism: the Net’s giants pretend to stop facial recognition

At a time when the police as an institution is being called into question, the multinational security companies are trying to redeem their image through publicity stunts: they claim to be halting the use of facial recognition because the technology is supposedly not fully developed and the police could misuse it.

Stopping facial recognition?

Several companies have announced their intent to stop using facial recognition, temporarily or otherwise. IBM was the first to take the lead: the company publicly stated that it would stop selling facial recognition for “general use” and called on the US Congress to establish a “national dialogue” on the issue. On the same day, a Google employee denounced the biases of the algorithms and the dangerousness of the technology. The next day, Amazon announced it would ban police use of its facial recognition software “Rekognition” for a year. Finally, Microsoft said it would stop selling such services until a more precise legislative framework is in place.

It is therefore the very same companies behind the invention and development of facial recognition that are denouncing it. Cheeky.

As Privacy International shows, IBM was one of the first companies to embark on facial recognition and to turn it into a business. It even invented the term “Smart City” to sell ever more video surveillance systems around the world. And, as we will come back to later, facial recognition is only a fraction of the algorithms IBM sells. According to the British association, IBM is taking the lead to avoid being accused of racism, as happened when it emerged that it had sold its technology to the Nazis during the Second World War. It may also be a way for the company to withdraw from a competitive market in which it is not a leader, as in Toulouse, where the town hall of J.-L. Moudenc signed a contract with IBM to equip some thirty cameras with automated video surveillance (VSA), without apparent success.

Through these statements, these companies try to steer our answer to the classic question that arises with many new technologies: is facial recognition bad in itself, or is it merely misused? From companies that profit from these technologies, the answer is always the same: the tools are supposedly neither good nor bad; only how they are used matters.

Thus, an announcement that looks like a pushback against the deployment of facial recognition is in fact a validation of its usefulness. Once the State has established a clear framework for the use of facial recognition, these companies will be free to deploy and sell their tools.

Using the anti-racist struggle to rebuild one’s image

Whether Google, Microsoft or Amazon, the digital giants that rely on artificial intelligence have already been singled out for the racist or sexist biases of their algorithms. These revelations, at a time when more and more people are rising up against racism, particularly against the police in the United States, increasingly damage the image of companies already known for their little respect for our rights. But the digital giants have more than one trick up their sleeve: they master the codes of communication and carefully manage their public image. The debate around algorithmic bias captures all the attention, while questions about personal data are ignored.

By announcing a halt to the use of their facial recognition tools for as long as they carry racist biases, Microsoft, IBM and Google achieve two things at once: they restore their image by posing as anti-racist even though they have been doing business on these tools for years, and they publicly impose a debate in which a total ban on facial recognition is not even considered. Amazon goes even further, claiming to prohibit the government from using Rekognition, its facial recognition software. The company’s message is clear: it is not facial recognition that is dangerous, but the State.

This about-face by the digital giants is a reminder of their enormous political weight: after years of convincing public authorities that these technological tools should be used to maintain order, they can afford the luxury of denouncing their use in order to restore their image. Such is the case of IBM, which has been promoting facial recognition since at least 2012, when it provided the city of Atlanta, in the United States, with a predictive policing programme using photos of high-risk criminals. And in 2020, it denounces its use by the police, for fear that the blame will fall on it.

Facial recognition, a scarecrow that makes automated CCTV acceptable

What is completely overlooked in these announcements is that facial recognition is just one tool among many in the surveillance arsenal developed by the digital giants. Because it touches the face, the most emblematic of biometric data, all the light is shone on it, but it is only the tip of the iceberg. At the same time, “automated video surveillance” tools are being developed (recognition of clothing, gait, behaviour, etc.) and discreetly installed in our streets, public transport and schools. By setting facial recognition up as a scarecrow, the GAFAMs seek to make automated video surveillance acceptable and to foreclose any debate on its prohibition.

It is nonetheless essential to affirm our collective rejection of these surveillance tools. Under the guise of “decision-making assistance”, these companies and their public partners shape and regulate public space. When a company decides that standing still for five minutes is abnormal behaviour, or when we know that cameras are scrutinising us to determine whether or not we are wearing a mask, we change our behaviour to fit the norm defined by these companies.

The big security companies are not defenders of freedoms or anti-racist counter-powers. They are simply afraid of being caught up in today’s denunciations of police abuse and structural racism. Their announcements reflect a well-understood commercial strategy: keep a low profile on certain tools in order to keep selling their other surveillance technologies. By masquerading as “moral and ethical multinationals”, if such a thing can exist at all, these companies take very little risk. Let us not be fooled: neither IBM nor Microsoft, let alone Amazon, will stop facial recognition. These companies are merely stalling, just like Google, which, in response to employee protests, suggested in 2018 that it would keep its distance from the Pentagon, but has since resumed its collaboration with the American army. In doing so, they kill two birds with one stone: by focusing the debate on facial recognition, the digital giants are not only trying to restore their image but are also leaving the door wide open to deploy other, equally intrusive, surveillance devices based on artificial intelligence.