Family Branch of the French Welfare System: technology in the service of exclusion and harassment of the most vulnerable

Translation of an article originally published in French on October 19th, 2022.

For the past year or so, as part of the Stop Contrôles collective[1], we have been fighting the effects of the digitalization of French public administrations and the use of scoring algorithms for the purpose of social control. Having first covered the situation at the French unemployment agency, Pôle emploi, we are now taking a look at the Caisses d’Allocations Familiales (CAF), the family branch of the French welfare system. We will soon come back with new publications about this fight, as we intend to commit to it fully in the coming months.

“There’s only one click between the CAF and you” is what a CAF poster proclaimed in early 2022. The subtitle was equally inspirational: “Access all CAF services 24/7”. An empty promise of a technology supposedly granting access to social security benefits at any time of day or night. But behind the slogans lies the reality of excessive digitalisation: a path to calculated social exclusion.

As the spread of online administrative procedures is combined with fewer and fewer walk-in services, essential to people in precarious situations[2], an algorithm is now responsible for predicting which CAF beneficiaries should be considered “(un)trustworthy” and therefore controlled[3]. This scoring algorithm, supposed to rank the “risk” that a beneficiary may unduly receive social benefits, underpins an institutional policy of harassment of those most in need[4].

The algorithm of shame

Fed by the extensive datasets the CAF collects on every beneficiary[5], the algorithm continuously evaluates their situation in order to classify and sort them using a “risk score”. This score, updated monthly, is then used by teams of CAF inspectors to select the individuals subjected to invasive controls[6].

The little information available about it reveals that the algorithm deliberately discriminates against people in need. The criteria that the algorithm associates with an elevated risk of abuse, and which therefore degrade a recipient’s score, include[7]:

    – having a low income,

    – being unemployed or not having stable employment,

    – being a single parent (80% of single parents are women),

    – spending a disproportionate amount of income on rent,

    – having many interactions with the CAF (for those who dare to ask for help).

Other parameters, such as the place of residence, the type of housing (social housing, etc.), the type of contact with the CAF (phone, email, etc.) or having been born outside the EU, are also used, although we cannot know exactly how they affect the score. But it is easy to imagine the fate of a foreigner living in an underprivileged neighborhood. And this is how, since 2011, the CAF has organized a veritable digital hunt for the most precarious, leading to a massive over-control of the needy, foreigners, and single women raising children[8].
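To make the mechanism concrete, here is a minimal sketch of what such a “risk score” looks like, assuming the logistic regression mentioned in the sources above[7]. Every feature name and weight below is hypothetical, since the CAF publishes neither; only the structure is faithful: criteria associated with precariousness carry positive weights, and the highest-scored files are handed to inspectors.

```python
# Illustrative sketch only: a logistic-regression "risk score" of the kind
# described in the sources cited above. All feature names and weights here
# are hypothetical; the CAF does not publish its parameters.
import math

# Hypothetical weights: positive values degrade (raise) the score.
WEIGHTS = {
    "low_income": 1.2,
    "unstable_employment": 0.9,
    "single_parent": 0.8,
    "high_rent_to_income_ratio": 0.7,
    "frequent_caf_contacts": 0.5,
}
BIAS = -3.0  # hypothetical intercept

def risk_score(beneficiary: dict) -> float:
    """Logistic regression: a score in [0, 1], recomputed monthly."""
    z = BIAS + sum(w * beneficiary.get(f, 0) for f, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

# Files are then ranked by score; the most "degraded" ones are selected
# for control.
beneficiaries = [
    {"id": 1, "low_income": 1, "single_parent": 1, "frequent_caf_contacts": 1},
    {"id": 2},  # a beneficiary with none of the flagged characteristics
]
for b in sorted(beneficiaries, key=risk_score, reverse=True):
    print(b["id"], round(risk_score(b), 3))
```

Whatever the real parameters, the logic is unchanged: a recipient who accumulates the characteristics listed above mechanically climbs to the top of the control queue.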

Worst of all, the CAF is proud of this. Its director considers this algorithm an important part of an “unending and voluntary policy of modernising the tools to fight frauds and cheaters”. The institution and its algorithm are regularly presented at the governmental level as a model to follow in the fight against “social welfare fraud”, a theme pushed by the entire right wing since the 2000s.

How can such a deeply discriminatory process be publicly defended by an administration that is supposed to help those in need? This is where the digitalisation of social control becomes especially dangerous: it provides a technological alibi to political leaders.

A technological alibi for an unfair policy

This algorithm allows the CAF to hide the social consequences of the filtering carried out by its control policy. The targeting of the poorest no longer needs to be mentioned in “annual control plans”. These plans now refer to “datamining targets” without describing the criteria used to calculate the “risk scores”. As one CAF controller put it: “Today, it’s true that data makes things easier for us. I don’t need to say that I’m selecting 500 RSA [French minimum income] recipients. I’m not the one doing it, the system is saying it! (Laughter)”[9]

Furthermore, the concept of “risk score” is also used to individualize the targeting process and deny its discriminatory nature. A CAF control manager thus declared to members of parliament that “rather than high-risk populations, we talk about profiles of beneficiaries at risk, in connection with data mining”[10]. In other words, the CAF argues that its algorithm does not target the poor as a social category but as individuals. However, many of the “risk factors” used to target recipients are socio-demographic criteria associated with situations of precariousness (low income, unstable professional situation, etc.). This rhetorical game is therefore statistical nonsense, as the French Defender of Rights has stated: “More than targeting on the basis of ‘presumed risks’, the practice of data mining leads to designating high-risk populations and, in so doing, leads to the idea that certain categories of users are more inclined to commit fraud.”

Finally, the algorithm is used by the CAF’s managers to relieve themselves of the responsibility for choosing the criteria used to target people. They turn this choice into a purely technical problem (predicting which beneficiaries are most likely to present irregularities), whose resolution falls to the institution’s team of statisticians. The only thing that matters is the effectiveness of the proposed solution (the quality of the prediction). The internal workings of the algorithm (the targeting criteria) become a mere technical detail of no concern to policymakers. A CAF director said publicly: “We [the CAF] do not draw up a typical profile of the fraudster. With datamining, we do not draw conclusions”, simply omitting to say that the CAF has delegated this task to its algorithm.

An anticipated over-control of society’s most precarious

Our reply to the CAF officials who deny the political nature of this algorithm is that it has only been trained to find what they decided to target[11]. The over-control of society’s most precarious is neither a coincidence nor the unexpected result of complex statistical computations. It is the result of a political choice, and its impact on the most vulnerable beneficiaries was well known long before the algorithm was deployed. The algorithm is not designed to “fight intentional fraud”, as claimed by the CAF (references here, here or here), but to prevent undue payments more broadly[12], which are most often the result of unintentional mistakes in declaration forms[13].

The CAF knew that the risk of error is especially high for people in precarious situations, due to the complexity of the rules governing access to their social security benefits. As early as 2006[14], a former director of the CAF’s fraud prevention division explained that “undue payments are due […] to the complexity of the benefits themselves”, and that this is even “more true for welfare related to precariousness” (meaning minimum social benefits). He added that this was because “many elements related to the situation of the beneficiary” are taken into account, which “fluctuate over time and are thus very unstable”. Regarding single women, he acknowledged the “difficulty in appreciating the concept of marital status”, yet another source of errors.

Expecting the algorithm to predict the risk of undue payments therefore means asking it to learn to identify who, among beneficiaries, relies on minimum social benefits or receives benefits whose computation depends on marital status. In other words, CAF decision-makers knew from the start of the project which beneficiaries the algorithm would profile as “high risk”.
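To illustrate this point, here is a minimal training sketch on entirely synthetic data (the feature names, rates and error probabilities below are our invention, not CAF figures): if the label to predict is “undue payment” rather than proven fraud, and declaration errors are concentrated among recipients of complex, means-tested benefits, any statistical learner will attach positive weight to precisely those characteristics.

```python
# Illustrative sketch on synthetic data: training against "undue payment"
# labels (not proven fraud) from randomly checked files. All names and
# probabilities are invented; only the mechanism matters.
import random

random.seed(0)

def random_checked_file():
    """A randomly checked file: error risk grows with benefit complexity."""
    on_minimum_benefits = random.random() < 0.3
    marital_status_dependent = random.random() < 0.4
    # Complex, fluctuating eligibility rules -> more declaration errors,
    # the very mechanism the CAF's fraud-prevention director described in 2006.
    p_error = 0.05 + 0.25 * on_minimum_benefits + 0.15 * marital_status_dependent
    features = {"on_minimum_benefits": on_minimum_benefits,
                "marital_status_dependent": marital_status_dependent}
    return features, random.random() < p_error

data = [random_checked_file() for _ in range(5000)]

# Crude stand-in for model fitting: the error-rate lift of each trait,
# which any learner trained on these labels would turn into a positive weight.
for trait in ("on_minimum_benefits", "marital_status_dependent"):
    flagged = [label for f, label in data if f[trait]]
    others = [label for f, label in data if not f[trait]]
    lift = sum(flagged) / len(flagged) - sum(others) / len(others)
    print(f"{trait}: +{lift:.2f} error-rate lift -> higher risk score")
```

No malicious weighting is needed: the discriminatory targeting follows directly from the choice of the training label.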

Nothing could be further from the truth than stating, as this institution did in response to criticism from the French Defender of Rights, that the “checks carried out” are “selected by a neutral algorithm” that follows “no presupposition”[15], or that “checks […] carried out through data-mining […] exclude any arbitrary decision”.

Discriminating for profitability

Why choose to detect errors rather than fraud? Because errors are more common and easier to establish: identifying fraud requires demonstrating intent on the part of the beneficiary. Targeting errors thus maximizes the amount of money recovered from beneficiaries, and with it the profitability of the checks.
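A back-of-the-envelope sketch, using invented round numbers (none of them come from the CAF), makes the incentive visible:

```python
# Back-of-the-envelope comparison with invented figures: why targeting
# errors maximises recovered money per check. Only the structure of the
# comparison is meaningful, not the numbers.
checks = 1000

# Fraud: rare among checked files, and intent must be demonstrated,
# which makes each check costly.
fraud_hit_rate, fraud_amount, fraud_check_cost = 0.02, 5000, 80
fraud_yield = checks * (fraud_hit_rate * fraud_amount - fraud_check_cost)

# Errors: frequent, smaller, and no intent needs to be proven.
error_hit_rate, error_amount, error_check_cost = 0.40, 600, 50
error_yield = checks * (error_hit_rate * error_amount - error_check_cost)

print(f"fraud-focused checks: {fraud_yield:>9,.0f} € recovered")
print(f"error-focused checks: {error_yield:>9,.0f} € recovered")
```

Under any comparable assumptions the error-focused strategy wins, which is exactly the “efficiency” rewarded by the agreement quoted below.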

To quote a former head of the CAF’s fraud prevention division: “Honestly, we at the CAF cannot take the lead on these really big fraud schemes, as the stakes are, in a way, too high for us”. She later expressed her satisfaction that the latest “management and objectives agreement” (the contract between the CAF and the state that defines a set of objectives for the administration) created “a distinction between the recovery rates of fraud and non-fraud […] because the efficiency is higher for non-fraud recoveries, which are, by definition, lower”.

This algorithm only serves to make the checks carried out by the CAF more profitable, in order to feed the CAF’s public communication[16]. The harassment of society’s most precarious by administrations is thus turned into proof of their “good management”.

Dehumanisation and digital exposure

However, digital technology has also deeply modified the checks themselves. They are now focused on analysing the personal data of recipients. CAF inspectors have access to a tentacular amount of personal data: bank accounts, data held by energy suppliers, phone operators, employers, shopkeepers and, of course, data held by other administrations (Pôle emploi, the French unemployment administration; the tax authorities; national social security funds; etc.). Checks have thus turned into a veritable digital strip-search.

These thousands of digital traces feed a control system in which the burden of proof is reversed. Much more than traditional interviews, personal data now forms the basis of inspectors’ decision-making. As one CAF inspector said: “Before, the interview was very important. […] Now the control of information before the interview is much more important”[17]. Another added that “when a controller prepares his file, just by checking partners’ databases, before meeting the recipient, he has a very good idea of what he will be able to find”.

Refusing to submit to such transparency is not an option and can lead to the suspension of benefits. There is no “digital right to silence”: opposition to full transparency is treated as obstruction. And for the most reluctant, the CAF reserves the right to request this information directly from the third parties that hold it.

The inspection then becomes a session of humiliation in which each person must justify the smallest details of their life, as this recipient testifies: “The interview […] with the CAF agent was a humiliation. He had my bank accounts in front of him and went through every line. Did I really need an Internet subscription? What had I spent the 20 euros I had withdrawn in cash on?”[18]

The score attributed by the algorithm serves, in particular, as proof of guilt. Contrary to what the CAF would have us believe, repeating to any attentive ear that the algorithm is only a “decision-making tool”, a degraded risk score generates suspicion and severity during checks. It is up to the claimant to answer for the algorithmic decision, to prove the algorithm wrong. This influence of algorithmic scoring on inspection teams, a recognized phenomenon known as “automation bias”, is described even better by a controller quoted by Vincent Dubois: “Given the fact that we are going to control a highly scored situation, some people told me that, well, there is a sort of, even unconsciously, obligation not to achieve results, but to say to themselves: if I am here, it means there is something there, so I have to find it.”

Tragic human consequences

These practices are made all the more revolting by the gravity of their human consequences. Psychological distress, loss of housing, depression: the checks leave deep marks on the lives of all those controlled. As one director of social action explains[19]: “You have to realise that undue payments are almost worse than not being paid”. He adds: “You are in a mechanism of recovery of undue payments, and administrations can also decide to cut off all access to social benefits for a period of six months. You really find yourself in a dark situation. In other words, you have made a mistake but you are paying a high price for it, and this is where an extremely serious deterioration begins, from which it is very difficult to recover.”

Claims for the reimbursement of undue payments can be an unbearable burden for people in financial difficulty, especially when they cover errors or omissions accumulated over a long period. This is compounded by the fact that overpayments can be recovered through deductions from all social benefits.

Worse, the many testimonies collected by the French Defender of Rights and by the Stop Contrôles and Changer de Cap collectives report numerous illegal practices by the CAF (failure to respect the adversarial process, difficulty in appealing, abusive suspension of aid, failure to provide the investigation report, lack of access to the findings) as well as the abusive reclassification of involuntary errors as fraud. These reclassifications lead to recipients being registered as fraudsters, which reinforces their stigmatization in future interactions with the CAF, with consequences that may extend beyond the CAF if this information is transferred to other administrations.

Digitization, bureaucracy and social inspection

Of course, digital technologies are not the root cause of the CAF’s practices. Like the digital policing of public space that we document in our Technopolice campaign, of which they are the “social” counterpart, they reflect policies centred on the logic of sorting, surveillance and the generalised administration of our lives.

Furthermore, the practice of scoring we denounce at the CAF is not specific to this institution; the CAF was merely a pioneer, the first social services administration to implement such an algorithm. It has since become the “bon élève” (“top of the class”), to use the expression of a right-wing MP, meant to inspire all other administrations. Today, the French administrations responsible for unemployment, health insurance and the national pension plan, and even the tax administration, are developing their own scoring algorithms, encouraged by the “Cour des comptes” (Court of Accounts, the national authority tasked with auditing public accounts) and the national delegation against fraud[20].

At a time when, to quote the researcher Vincent Dubois, our social system tends to “reduce unconditionally attributed social rights to favour specific support conditioned on individual situations”, thus logically inducing more controls, it is legitimate to question projects aiming at automating social welfare, such as the “solidarity at source” proposed by the French President. This automation can only come at the cost of ever increasing scrutiny of the population, and will require digital infrastructures that, in turn, will bestow further power on the state and its administration.

Keep fighting

In response to all of this, we demand that the CAF stop using a scoring algorithm. The search for undue payments, the majority of which amount to no more than a few hundred euros, cannot justify practices which, in essence, push precarious people into situations of extreme distress.

In response to the remarks of a CAF director who declared he could not “answer precisely on the bias” his algorithm may contain, implying that the algorithm could simply be improved, we maintain that the problem is not technical but political[21]. This algorithm cannot exist without discriminatory control practices. Consequently, the scoring algorithm must be abandoned altogether.

We will soon publish more information on the actions we are planning in order to fight, at our level, against these policies. Until then, we will continue to document the use of scoring algorithms across French administrations. We invite those who want to, and can, to organize and mobilize locally, as with the Technopolice campaign coordinated by La Quadrature. In Paris, you can find us and discuss this struggle at the general assemblies of the Stop Contrôles collective, whose press releases we relay on our website.

This struggle would greatly benefit from exchanges with those who, whether at the CAF or elsewhere, have information on this algorithm (details of the criteria used, internal dissent triggered by its implementation…) and would like to help us fight such methods. We encourage them to contact us at contact@laquadrature.net. You may also anonymously submit documents through our SecureDrop (see the user guide here).

To conclude, we wish to denounce the constant police monitoring of the Stop Contrôles collective: phone calls from the intelligence services, references to the collective’s activities made to some of its members in the context of other struggles, and a disproportionate police presence during simple leafleting operations in front of CAF offices. These police operations aim to intimidate and repress a legitimate and necessary social contestation.

References
1 The “Stop Contrôles” collective can be contacted at the following address: stop.controles@protonmail.com for stories or current problems with CAF or Pôle emploi controls, but also to find collective ways to oppose them.
2 See the report of the French Defender of Rights “Digitization of public services: 3 years later”, available here. See also the appeal signed by 300 NGOs/collectives on the difficulties generated for people in precarious situations, available at https://www.defenseurdesdroits.fr/sites/default/files/atoms/files/ddd_rapport-dematerialisation-2022_20220307.pdf.
3 See the CNIL’s opinion describing the algorithm as a “tool for detecting correlations in recipients’ files between high-risk files (typical fraudulent behaviour)”, available at https://www.legifrance.gouv.fr/cnil/id/CNILTEXT000022205702. This positive opinion contains no criticism of the substance of the project, of the risks of discrimination it entails, or of the misuse of recipients’ data initially collected for the needs of the social state. Overall, it merely recommends encrypting the database.
4, 8 Vincent Dubois, 2021. “Controlling the assisted. Genesis and use of a watchword”. On the over-control of the most precarious populations, see Chapter 10. On the political history of the “fight against assistance”, and the major role played by Nicolas Sarkozy in France, see Chapter 2. On the evolution of control policies, their centralisation following the introduction of the algorithm and the definition of targets, see pages 177 and 258. On the contestation of national targeting plans by local CAF directors, see page 250.
5, 11 For technical details on the algorithm and its training, see Pierre Collinet’s article “Le datamining dans les caf: une réalité, des perspectives”, written in 2013 and available at https://www.cairn.info/revue-informations-sociales-2013-4-page-129.htm. He explains how the training of the algorithm uses a database containing more than a thousand pieces of information per beneficiary. The final model, after training and selection of the most ‘interesting’ variables, is based on a few dozen variables. He also explains that the algorithm is trained to detect undue payments and not cases of fraud.
6 There are three types of controls at the CAF. Automated checks are procedures for verifying recipients’ declarations (income, employment status, etc.), organised by cross-referencing administrative files (taxes, employment office, etc.). These are by far the most numerous. Documentary checks consist of requesting additional supporting documents from the recipient. Finally, on-site checks are the least numerous but the most intrusive: carried out by a CAF controller, they consist of an in-depth check of the recipient’s situation. The vast majority of these checks are now triggered by the algorithm following a deterioration of the recipient’s rating (see Vincent Dubois, “Contrôler les assistés”, p. 258). On-site checks can also be triggered by reports (police, employment centre, counsellors, etc.) or by standard targets defined either locally or nationally (RSA checks, students, etc.); these two categories accounted for most control triggers before the algorithm was used.
7 The CAF maintains a high level of opacity regarding the criteria governing its operations. It even refuses to give more information to recipients who have been controlled following a deterioration in their score. No document presents all the parameters used by the “logistic regression” algorithm, nor their weighting. The information presented here comes from the following sources: the CNIL’s opinion on the algorithm (https://www.legifrance.gouv.fr/cnil/id/CNILTEXT000022205702); Vincent Dubois’ book “Contrôler les assistés”; Letter n°23 of the National Office for the Fight against Fraud, available at https://www.economie.gouv.fr/files/lettre_dnlf_info_23.pdf (see pages 9 to 12); and the report “Fight against fraud in social benefits” of the Defender of Rights, available at https://juridique.defenseurdesdroits.fr/doc_num.php?explnum_id=16746. Pierre Collinet’s article “Le datamining dans les caf: une réalité, des perspectives”, available at https://www.cairn.info/revue-informations-sociales-2013-4-page-129.htm, details the construction of the algorithm.
9, 17 These quotes are taken from the report “Politique de contrôle et lutte contre la fraude dans la branche Famille” published in 2016 and written by Vincent Dubois, Morgane Paris and Pierre Edouard Weil. On the extension of the right of communication, see pages 53-54.
10, 19 See the report “Fighting social benefit fraud”, available at https://www.carolegrandjean.fr/mission-gouvernementale-sur-la-fraude-aux-prestations-sociales/, and, above all, the report on the hearings conducted within this framework, available at https://www.carolegrandjean.fr/wp-content/uploads/2019/11/Annexes-Auditions-LFPS.pdf. In particular, page 85 contains the transcription of the exchange with employees of the Meurthe-et-Moselle social services, which testifies to the difficult position in which control policies place beneficiaries. On a completely different note, the first hearing is that of a self-proclaimed ‘expert in the fight against fraud’. It is particularly hard to read because of this character’s lack of humanity, but it is very instructive about the way of thinking of those who advocate social control at all costs.
12 It would seem that the algorithm was initially trained on proven fraud cases, but that it was quickly decided to use it to detect undue payments in the broad sense (independently of establishing fraudulent intent). A former director of the “control and fight against fraud” department declared in 2010, before the social affairs commission of the National Assembly: “We are currently testing more sophisticated models in seventeen organizations, but based on the observation of undue payments and not on fraudulent undue payments” (see here).
13 A director of the Risk Management and Anti-Fraud Department declared in the context of a government mission on social benefit fraud in 2019: “80% of our undue payments are linked to errors in resources and professional situations, mainly resources”.
14 These quotes and assessments of the proportion of fraud in undue payments are taken from three articles written by a former director of the CNAF’s “control and fight against fraud” department. The first, “Du contrôle des pauvres à la maîtrise des risques”, was published in 2006 and is available here. The second, entitled “Le contrôle de la fraude dans les CAF”, published in 2005, is available here. See also a third article, “The rightful payment of social benefits by CAFs”, published in 2013 and available here.
15 The second quote is from a speech by a former CNAF director to the Senate Social Affairs Committee in 2017.
16 As is the case here, where it is written “For 1€ spent, the work of a controller yields 8 times more”.
18 See Lucie Inland’s article, available here, and the Defender of Rights’ report “The fight against social benefit fraud”. The Abbé Pierre Foundation, the Defender of Rights and the Changer de Cap collective have also collected numerous testimonies describing the violence experienced by recipients during controls: difficulty in appealing, repeated checks, automatic suspension of social benefits, unprecedented intrusion into the smallest corners of their private lives. We invite you to read these testimonies here.
20 On the use of nationality as a risk factor, see the report “Lutte contre la fraude aux prestations sociales” (Fighting social benefit fraud) by the French Defender of Rights. It quotes an internal CAF circular (No. 2012-142 of 31 August 2012) recommending, among other things, that “people born outside the European Union should be targeted”. The role of the DNLF in the development of scoring tools is also mentioned.
21 The ‘biases’ of algorithms are often presented as a simple technical problem, as in the case of facial recognition algorithms that recognise white people better. The trouble with this criticism, however real, is that it sometimes sidesteps the political dimension of algorithms by reducing the problem to technical considerations that could one day be corrected. This algorithm is interesting in that respect because it was trained ‘according to the rules of the art’ (see the references above), on a database built from random checks. There is therefore no prior sampling bias, as there is for facial recognition algorithms. That said, the algorithm reproduces the human biases of the controls carried out on these randomly selected files (severity towards people on minimum social benefits, difficulty in identifying complex frauds, etc.). But above all, as explained in the article, it reflects the complexity of the rules for accessing social benefits, a purely political issue that the algorithm merely reveals.