Fight against algorithmic social control

Whether you’re retired, a student, unemployed, a parent or a health insurance beneficiary: whatever your situation, your most intimate data is fed to dozens of algorithms whose job it is to assess your integrity.

Through the exploitation of our most intimate data (health data, family life, professional situation…), these algorithms continuously compare us to lists of “profile types” of suspects: the lazy unemployed person, the ill-intentioned precarious worker, the single mother, the retiree fond of travelling, the lying disabled person, the dishonest patient.

Developed in total secrecy in the name of the “fight against fraud” by the social security authorities – the Family branch of the French welfare system (CAF), the French Unemployment Agency (Pôle Emploi), the French Health Insurance (Assurance maladie), the French public pension system (Assurance vieillesse), the French Social Security for farmers (Sécurité Sociale Agricole)… – each of these algorithms assigns us a risk score – or “suspicion score” – which is used to select which of us will be subject to checks.

We are witnessing the advent of a true “liberal” version of the social credit system. Far from the idea of an authoritarian regime assigning a single score to each citizen, on which all our interactions with administrations would be based, the logics of control at work here are more refined and pernicious.


Documenting these practices is made difficult by the refusal of administrations to provide any information whatsoever, which we interpret as a sign of widespread embarrassment. We are fighting against this opacity and will publish information here as and when it becomes available.

At La Quadrature du Net, we refuse to let our social system be transformed into a gigantic real-time surveillance system. We refuse to allow the vast amounts of data held by social administrations, initially collected to ensure their proper functioning, to be misused for social control purposes. We refuse to let the computerization of the world be synonymous with dehumanizing rationalization through the race for “efficiency”, the reduction of our lives to a handful of numbers, and the constant sorting and comparing of individuals.

We invite you to join us in the fight against the unlimited extension of police logic within our social administrations. We need help mapping the digital surveillance practices of our administrations, legal advice and your feedback!

How do welfare authorities keep tabs on us?

Join the fight against algorithmic social control!


Welfare

The Caisse Nationale d’Allocations Familiales (CNAF), the Family branch of the French welfare system, was the first administration to implement a fraud risk-scoring algorithm. Each month, this algorithm analyzes the personal data of over 30 million French people and assigns each recipient a “suspicion score” used to determine which households will be subject to checks.
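To make the mechanism concrete, here is a minimal sketch of how a risk-scoring system of this kind typically works: weighted personal attributes are combined into a single score, and everyone above a threshold is flagged for a check. The feature names, weights and threshold below are invented for illustration; they are not taken from the CAF’s actual algorithm.

```python
# Hypothetical illustration of a fraud risk-scoring model.
# Feature names, weights and threshold are invented for this sketch;
# they are NOT taken from the CAF's actual source code.

def suspicion_score(features: dict, weights: dict) -> float:
    """Combine personal attributes into a single 'suspicion score'."""
    return sum(weights.get(name, 0.0) * value
               for name, value in features.items())

def select_for_checks(recipients: list, weights: dict, threshold: float) -> list:
    """Return the ids of every recipient whose score exceeds the threshold."""
    return [r["id"] for r in recipients
            if suspicion_score(r["features"], weights) > threshold]

# Invented example data: each feature is a 0/1 flag.
weights = {"single_parent": 0.4, "low_income": 0.3, "recent_move": 0.2}
recipients = [
    {"id": "A", "features": {"single_parent": 1, "low_income": 1, "recent_move": 0}},
    {"id": "B", "features": {"single_parent": 0, "low_income": 0, "recent_move": 1}},
]
print(select_for_checks(recipients, weights, threshold=0.5))  # prints ['A']
```

The sketch makes the political problem visible: whoever chooses the features and their weights decides, in advance, which categories of the population will be checked most often.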

See our dedicated page

After a long legal battle based on FOIA requests, we obtained the source code of the algorithm used by the CAF. The CAF has succeeded in combining dystopian practices with discrimination against the most disadvantaged: its algorithm deliberately targets them, resulting in massive over-control of poor people, disabled people, and women raising children alone.

Health

The Caisse nationale d’assurance maladie (CNAM), the Health branch of the French welfare system, is one of the most advanced administrations in the field of algorithmic control. It has developed an arsenal of risk-scoring algorithms targeting both insured persons and healthcare professionals.

Given that the CNAM operates a gigantic database covering the health data of more than 69 million people, the development of such practices within it is particularly worrying.


As early as 2010, it experimented with an algorithm for scoring low-income beneficiaries. From 2014, it also set out to develop tools for detecting “atypical profiles” of healthcare professionals. Nurses and doctors are now considered “suspects” from whom the Assurance Maladie must protect itself.

Its digital tracking practices do not stop at the use of profiling algorithms. The exact extent of the algorithmic surveillance developed by the Assurance Maladie remains difficult to measure, however, as its managers are opposed to even a minimum of transparency, as various exchanges we have had with them attest.

We have initiated various legal actions to document these practices better. We will detail all our research on a dedicated page very soon.

Pension

The Caisse Nationale d’Assurance Vieillesse (CNAV), the French public pension system, manages the pensions of nearly 15 million people in France. It has not escaped the injunctions to “rationalize” and digitally track down our elders.

While the development of profiling algorithms at the CNAV seems to be a lower priority than at the CNAF or CNAM, since 2016 the CNAV has used an algorithm tasked with sorting “good” from “bad” pensioners.


The CNAV is, however, at the forefront of other forms of digital surveillance. It is particularly involved in developing the use of facial recognition for remote verification of the “existence” of insured persons. This technique, authorized since July 2023, was pushed through by stigmatizing pensioners living in the Maghreb, suspected of continuing to receive their pensions after their death…

We’ll be coming back to you in due course with any information we gather on the subject.


Unemployed

The use of datamining by Pôle Emploi (PE), the French Unemployment Agency, to monitor the unemployed began in 2014. Inspired by the CAF’s experience, PE sought to use algorithms to detect unemployment insurance fraud. While the first experiment does not seem to have convinced its leaders, new experiments took place in the early 2020s.


While Pôle Emploi has assured us that no algorithms are currently being used for job-search control purposes – its algorithms are said to focus on detecting “scams” such as identity theft or the production of false documents – we are awaiting further documentation.

Ask for your score!

Have you recently undergone a CAF, Pôle Emploi, URSSAF or Health Insurance check, and would like to understand the criteria used to select you? Or are you simply curious to know whether you are trustworthy in the eyes of the State and its administrations?

Administrations are legally obliged to inform you of the “suspicion scores” they have assigned to you, along with explanations of how they are calculated.

We will soon put online sample emails to send to the various administrations. By sending these emails, you can help us both to document the (bad) practices of administrations and to put pressure on their leaders! In the meantime, you can contact us at algos@laquadrature.net!

Mapping algorithms

The first step in the fight against these algorithms is… to know that they exist! Mapping social control algorithms is a particularly long process.

The information available is limited and scattered across many sources, and the heads of the main government departments are doing everything in their power to oppose any request for information about these algorithms. That is why we need your help. We will soon put online a guide to help you find your way around, along with ideas for action to help us document government surveillance practices.

Support La Quadrature du Net

For years, La Quadrature has been fighting against the surveillance and censorship imposed by states and corporations. While the fronts are multiplying, our means remain the same. So that we can continue to lead new battles, such as the fight against the use of suspicion algorithms in administrations, we need your support. To find out more about our main battles in 2024, visit the support page.

Make a donation