CAF: Generalized scoring of recipients

The CAF – the family branch of the French welfare system – was the first social administration in France to develop a risk-scoring algorithm. A dystopian tool that analyzes our behavior in near-real time in search of "suspicious behavior", it assigns a "suspicion score" to every recipient. When this score deteriorates too much, a check is triggered.

Worse, the algorithm deliberately targets the most precarious. Being poor, receiving minimum social benefits, being unemployed or living in an underprivileged neighborhood: such parameters are "risk factors" that degrade a recipient's score and increase the likelihood of being inspected.

Alerted to this algorithm in 2022 by the Stop Contrôles and Changer de Cap collectives, we have been working ever since to denounce it and to shed light on its iniquitous operation.

Read our introductory article on this topic

Read our article on the source code of this algorithm

32 million lives under surveillance

Every month, this algorithm analyzes the personal data of more than 32 million French people living in a household receiving a CAF benefit.

The scores are updated on the first of every month to keep pace with the slightest change in our lives. They are calculated from a selection of variables chosen from among the thousands of pieces of data the CAF holds on each recipient and their family (children, spouse).


Although the algorithm has evolved over time, here are examples of the data it processes to distinguish "good" from "bad" recipients: family structure (children's situation, births, separations, death of a spouse…), professional life (changes in salary, loss of a job, long-term unemployment) and even interactions with the CAF itself (appointments at reception desks, frequency of logins to one's caf.fr account…).
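To make the mechanism concrete, here is a minimal sketch, in Python, of what such a monthly scoring run might look like. Every variable name, weight and threshold is a hypothetical illustration chosen for readability; none of them comes from the CAF's actual model (whose source code we publish below).

    # Minimal sketch of a monthly risk-scoring run. All variable names,
    # weights and the threshold are hypothetical illustrations, NOT values
    # from the CAF's actual model.
    from dataclasses import dataclass

    @dataclass
    class RecipientFile:
        # Hypothetical subset of the variables selected from a recipient's file.
        months_unemployed: int
        recent_separation: bool
        monthly_caf_fr_logins: int
        recent_income_change: bool

    # Illustrative weights: each life event shifts the "suspicion score".
    WEIGHTS = {
        "months_unemployed": 0.04,
        "recent_separation": 0.35,
        "monthly_caf_fr_logins": 0.02,
        "recent_income_change": 0.25,
    }

    CHECK_THRESHOLD = 0.8  # hypothetical cutoff above which a check is triggered

    def suspicion_score(r: RecipientFile) -> float:
        """Recomputed on the first of each month from the latest file data."""
        return (WEIGHTS["months_unemployed"] * r.months_unemployed
                + WEIGHTS["recent_separation"] * r.recent_separation
                + WEIGHTS["monthly_caf_fr_logins"] * r.monthly_caf_fr_logins
                + WEIGHTS["recent_income_change"] * r.recent_income_change)

    def monthly_run(files: list[RecipientFile]) -> list[int]:
        """Return the indices of the recipients flagged for a check this month."""
        return [i for i, r in enumerate(files)
                if suspicion_score(r) >= CHECK_THRESHOLD]

The point of the sketch is the pipeline itself: every recipient is re-scored every month, and crossing the threshold is what triggers a check.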

The intrusion of this algorithm into our lives knows no bounds. All this with the approval of the CNIL – the French data protection authority – which sees no problem in the CAF digitally spying on us the better to sort and classify us. And to control us.

Targeting of the most precarious

Like any form of surveillance, this one does not affect everyone equally: it reflects the structural inequalities of our society. "Suspicion scores", for example, are strongly correlated with social status.

Variables that worsen this score include having a low income, being unemployed, relying on minimum social benefits, living in an underprivileged neighborhood, spending a significant proportion of one's income on rent, and, more generally, lacking a stable job or income.


The algorithm also deliberately targets people with disabilities. Receiving the Allocation Adulte Handicapé (AAH) – a benefit for disabled adults – while working is one of the parameters that most strongly degrades a recipient's score.
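Assuming, for illustration, a logistic-regression form – a common choice for this kind of data-mining model – the following Python sketch shows how such precarity-linked variables mechanically push a score towards the "to be checked" end. All coefficients are invented for the example; the real weights appear in the source code published below.

    # Illustrative logistic score: positive coefficients worsen the score.
    # All coefficients here are invented for the example.
    import math

    COEFS = {
        "low_income": 1.2,
        "unemployed": 0.9,
        "minimum_social_benefits": 1.1,
        "underprivileged_neighborhood": 0.6,
        "high_rent_to_income_ratio": 0.7,
        "aah_while_working": 1.5,  # the kind of variable described above
    }
    INTERCEPT = -3.0

    def score(profile: dict[str, bool]) -> float:
        z = INTERCEPT + sum(COEFS[k] for k, v in profile.items() if v)
        return 1 / (1 + math.exp(-z))  # logistic function, maps to 0..1

    precarious = dict.fromkeys(COEFS, True)
    well_off = dict.fromkeys(COEFS, False)
    print(f"precarious recipient: {score(precarious):.2f}")  # ~0.95: flagged
    print(f"well-off recipient:   {score(well_off):.2f}")    # ~0.05: left alone

With weights like these, accumulating precarity markers is enough to cross any plausible threshold, regardless of whether the person has ever done anything wrong.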

Dramatic human consequences

These practices are all the more revolting because the human consequences can be very serious, as Lucie Inland recounts. Psychological distress, loss of housing, depression: inspections leave their mark on people’s lives.


Demands for reimbursement of undue payments can be an unbearable burden for people in financial difficulty, particularly when they stem from errors or oversights spanning a long period of time. On top of this, overpayments can be recovered through deductions from any social security benefit.

Worse still, the many testimonies gathered by the Défenseur des Droits and the Stop Contrôles and Changer de Cap collectives point to numerous illegal practices on the part of the CAF (failure to respect the adversarial process, obstacles to appealing, abusive suspension of benefits, failure to provide the investigation report, lack of access to findings), as well as the abusive reclassification of involuntary errors as fraud.

But digital technology has also profoundly transformed the inspection itself, which now centers on the analysis of recipients' personal data. The right of access granted to inspectors has become sprawling. Bank accounts, data held by energy suppliers, telephone operators, employers, shopkeepers and, of course, other institutions (the unemployment office, the tax authorities, the national social security funds, etc.): the inspection has been transformed into a veritable digital strip-search.

The inspection then becomes a session of humiliation in which one must justify every detail of one's life, as this claimant testifies: "The interview […] with the CAF agent was a humiliation. He had my bank accounts in front of him and went through them line by line. Did I really need an Internet subscription? What had I spent the 20 euros I had withdrawn in cash on?"

A technical alibi

Legitimized in the name of the "fight against fraud", the algorithm was actually trained to predict undue payments, also known as overpayments. Yet overpayments are concentrated among recipients of minimum social benefits, people in unstable situations and single-parent families. This concentration is explained by the fact that these benefits are governed by complex rules – the fruit of successive policies to "fight welfare dependency" – which multiply the risk of errors.
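The statistical mechanism is straightforward. The simulated sketch below – using scikit-learn purely for illustration, with no assumption about the CAF's own tooling – shows how a model whose training label is "an overpayment was found", rather than "fraud was proven", mechanically learns to flag recipients of the most error-prone benefits, i.e. the most precarious.

    # Simulated illustration: training on "overpayment found" rather than
    # "fraud proven" makes the model target error-prone benefits.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Hypothetical feature: 1 if the recipient gets a minimum social benefit
    # governed by complex, error-prone rules.
    on_complex_benefit = rng.integers(0, 2, n)

    # Honest errors happen far more often under complex rules (simulated rates).
    p_overpayment = np.where(on_complex_benefit == 1, 0.30, 0.05)
    overpayment_found = rng.random(n) < p_overpayment

    model = LogisticRegression().fit(on_complex_benefit.reshape(-1, 1),
                                     overpayment_found)
    print(model.coef_)  # positive: recipients of complex benefits score higher,
                        # even though fraud never appears in the training labels.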


This state of affairs has always been known to CAF managers. Here is what a CAF anti-fraud manager wrote several years before the algorithm was introduced: "In reality, it is the social benefits themselves that generate the risk. […] this is all the more true for benefits linked to precariousness […], which are highly dependent on the family, financial and professional situation of recipients." Since then, no one within CAF management can claim to be unaware of the discriminatory consequences of using the algorithm.

And yet, CAF managers have taken refuge behind a pseudo-scientific neutrality. A former director went so far as to write that the "algorithm is neutral" and even "the opposite of discrimination", since "no one can explain why a file is targeted".

Lifting the veil of secrecy: source code and technical analysis

Faced with such bad faith, we battled for many months to obtain the source code – the formula – of the algorithm used by the CAF. Our aim is to remove any doubt about the reality of the CAF's practices, so that the obvious becomes clear to everyone, and so that the CAF's managers take responsibility for implementing a deliberately discriminatory policy of generalized surveillance.


We publish here the source code of the two versions of the algorithm, used from 2010 to 2014 and from 2014 to 2018, respectively. The source code is accompanied by a detailed technical analysis covering, in particular, the construction of the typical profiles used in the article presenting it, as well as their limitations.

Going further

Here are a few reference documents on the algorithm and its political history.


– Vincent Dubois, Morgane Paris and Pierre-Edouard Weil, 2016. “Politique de contrôle et lutte contre la fraude dans la branche Famille” available here.
– Vincent Dubois, 2021. "Controlling the assisted. Genesis and uses of a watchword". On the over-control of the most precarious populations, see chapter 10. On the political history of the "fight against welfare dependency", and the major role played in France by Nicolas Sarkozy, see chapter 2. On the evolution of control policies, their centralization following the introduction of algorithms, and the definition of targets, see pages 177 and 258. On the contestation of national targeting plans by local CAF directors, see page 250.
– Pierre Collinet, 2013. "Le datamining dans les caf: une réalité, des perspectives". The author explains in particular that the training of the algorithm draws on a database containing over 1,000 pieces of information per recipient. The final model, after training and selection of the most "interesting" variables, relies on a few dozen variables. He also explains that the algorithm is trained to detect undue payments, not cases of fraud.
– CNIL, 2010. See the CNIL's opinion describing the algorithm as a "tool for detecting, within recipients' files, the correlations characteristic of high-risk files (typical fraudulent behavior)", available here. This favorable opinion is, moreover, strikingly devoid of any criticism, whether of the substance of the project and the risks of discrimination it entails, or of the diversion of recipients' data, initially collected for the needs of the welfare state, to other purposes. Overall, it merely recommends that the database be encrypted.
– Défenseur des droits, 2017. Report "Lutte contre la fraude aux prestations sociales", available here.

Support La Quadrature du Net

La Quadrature has been fighting for years against the surveillance and censorship imposed by states and corporations. While the fronts are multiplying, our means remain the same. So that we can continue to take up new battles, such as the fight against administrations' use of suspicion algorithms, we need your support. To find out more about our main battles for 2024, visit the support page.

Make a donation