Predictive Policing in France: Against opacity and discrimination, the need for a ban

After several months of investigation, as part of a European initiative coordinated by British NGO Fair Trials, La Quadrature has released a report on the state of predictive policing in France. In light of the information gathered, and given the dangers these systems carry when they incorporate socio-demographic data as a basis for their recommendations, we call for their ban.

After documenting back in 2017 the arrival of so-called predictive policing systems, and then being confronted with the lack of up-to-date information and real public debate, we sought to investigate them in more detail. For this report, we have therefore compiled the data available on several predictive policing software systems formerly or currently in use within French police forces. These include:

  • RTM (Risk Terrain Modelling), a “situational prevention” software program used by the Paris Police Prefecture to target intervention zones based on “environmental” data (presence of schools, shops, metro stations, etc.);
  • PredVol, software developed in 2015 within the government agency Etalab and tested in Val d’Oise in 2016 to assess the risk of car thefts, before being abandoned in 2017 or 2018;
  • PAVED, software developed from 2017 onwards by the Gendarmerie and trialed from 2018 in several départements to assess the risk of car thefts or burglaries. In 2019, shortly before its planned nationwide rollout, the project was “paused”;
  • M-Pulse, previously named Big Data of Public Tranquility, developed by the city of Marseille in partnership with the company Engie Solutions to assess the suitability of municipal police deployments in urban public space;
  • Smart Police, an application that includes a “predictive” module, developed by the French startup Edicia, which, according to its website, has sold this software suite to over 350 municipal forces.

Dangerous technologies, without supervision or evaluation

Here we summarize the main criticisms of the systems studied, most of which use artificial intelligence techniques.

Correlation is not causation

The first danger associated with these systems, itself amplified by the lack of transparency, is that they extrapolate results from statistical correlations between the different data sources they aggregate. Out of bad faith or ideological laziness, the developers of these technologies maintain a serious confusion between correlation and causation, or at least refuse to draw the distinction between the two. This confusion is reflected in very concrete ways in the design and functionality of the applications and software used in the field by police officers, but also in their consequences for the people living in the areas exposed to increased policing.

When using these decision-support systems, the police should therefore at the very least strive to demonstrate the explanatory relevance of using specific socio-demographic variables in their predictive models (i.e. go beyond simple correlations to trace the structural causes of delinquency, which could lead to considering actual remedies rather than mere securitarian policies). This would imply, first and foremost, being transparent about these variables, which is far from being the case.

In the case of PAVED, for example, the predictive model is said to use fifteen socio-demographic variables which, according to the developers, are strongly correlated with crime. However, there is no transparency about the nature of these variables, let alone any attempt to demonstrate a true cause-and-effect relationship. The same is generally true of the variables used by Smart Police, Edicia’s software, although in this case we have even less visibility on the exact nature of the variables used by the system.

Potentially discriminatory variables

It is likely that, just like the algorithms used by the Caisse nationale des allocations familiales (CNAF), which we recently exposed, some of the socio-demographic variables used are discriminatory. Risk scores may well be correlated with a high rate of unemployment or poverty, or with a high proportion of people born outside the European Union, in the neighborhood under consideration. In a system like PAVED, we know that the data used to establish “predictions” include nationality and immigration data, household income and composition, and level of education. All of these variables are likely to lead to the targeting of the most precarious populations and of those most exposed to structural racism.
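To make this concern concrete, here is a minimal sketch, in Python, of what a purely correlational risk model built on this kind of socio-demographic aggregate could look like. Everything in it is hypothetical: the feature names, the synthetic data and the model are ours, not PAVED’s (whose variables remain undisclosed). It only illustrates why a strong statistical weight on a variable such as the unemployment rate says nothing about the causes of crime when the target variable is itself a product of where the police already concentrate their attention.

```python
# Hypothetical sketch of a correlational risk model; synthetic data, made-up features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_areas = 500

# Synthetic per-neighborhood aggregates, stand-ins for the kind of variables
# reportedly used (income, unemployment, education, origin).
unemployment = rng.uniform(0.02, 0.25, n_areas)
median_income = rng.normal(22_000, 5_000, n_areas)
born_outside_eu = rng.uniform(0.0, 0.35, n_areas)

# "Recorded" offenses in this toy world simply track existing police presence,
# itself concentrated in poorer areas: the label measures where offenses get
# recorded, not where harm is caused.
police_presence = 0.5 + 2.0 * unemployment
recorded_hotspot = rng.poisson(police_presence * 3.0) > 2

X = np.column_stack([unemployment, median_income / 1e4, born_outside_eu])
model = LogisticRegression().fit(X, recorded_hotspot)

# The model faithfully reproduces the correlations present in the recorded data.
# A large positive weight on `unemployment` is not a causal explanation of crime;
# it mostly mirrors how the labels were produced above.
for name, w in zip(["unemployment", "median_income", "born_outside_eu"], model.coef_[0]):
    print(f"{name:>16}: weight {w:+.2f}")
```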

False criminological beliefs

Another danger associated with these systems, itself amplified by the lack of transparency, lies in the fact that they entrench discredited criminological doctrines. The promoters of predictive policing refuse to build a general understanding of deviant behavior and illegalisms: they make no mention of policies of exclusion and discrimination, or of the social violence of public policies.

But when they do venture to propose explanatory models and attempt to fit them into their scoring algorithms, developers seem to rely on “knowledge” whose relevance is highly dubious. Some doctrinal allusions appear, for example, in research articles by PAVED’s main developer, Gendarmerie Colonel Patrick Perrot. They contain basic assumptions about crime (for example, crime as a “constantly evolving phenomenon”) and allude to “weak signals” and other “warning signs” of delinquency that echo “broken windows” theories, whose scientific basis is widely questioned. Similarly, in the case of Edicia, the predictive module seems to be based on the idea that delinquency has a geographical spillover (or “contagion”) effect, and it also incorporates postulates “brought up from the field” which claim that “petty delinquency leads to major delinquency”.
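To illustrate the kind of spillover postulate at work here, below is a minimal sketch of a “near-repeat” contagion score: each recorded incident raises the risk of surrounding grid cells, with a contribution that decays with distance and time. This is our own illustrative construction, not Edicia’s or anyone else’s actual algorithm; the function name and decay parameters are assumptions chosen only for readability.

```python
# Illustrative "near-repeat" / contagion-style heuristic; not any vendor's real algorithm.
import math

def contagion_score(cell, incidents, dist_scale=300.0, time_scale=7.0):
    """Risk score for a grid cell: every past incident contributes a weight that
    decays exponentially with distance (meters) and with elapsed time (days)."""
    x, y = cell
    score = 0.0
    for ix, iy, days_ago in incidents:
        dist = math.hypot(x - ix, y - iy)
        score += math.exp(-dist / dist_scale) * math.exp(-days_ago / time_scale)
    return score

# Two recent incidents near (0, 0) push up the score of neighboring cells,
# which is exactly how "risk" spreads geographically under this postulate.
incidents = [(0.0, 0.0, 1), (100.0, 50.0, 3)]
for cell in [(50.0, 0.0), (400.0, 400.0), (1500.0, 0.0)]:
    print(cell, round(contagion_score(cell, incidents), 3))
```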

These inane doctrines serve above all to mask the disastrous consequences of neoliberal policies and to criminalize everyday incivilities; they must be read as a key element in an attempt to criminalize the poor. They are now being baked into the automated systems that police forces are equipping themselves with, which makes them even harder to decipher.

A risk of self-reinforcement

The criticism is well known, but it bears repeating: predictive policing software carries a major risk of feedback loops and self-reinforcing effects, leading to an ever greater concentration of police domination over specific neighborhoods (surveillance, identity checks, use of coercive powers).

In fact, their use necessarily leads to the over-representation, in the training data, of the geographical areas defined as high-risk. As soon as a significant number of patrols are sent to a given area in response to the algorithm’s recommendations, officers will observe offenses, even minor ones, and collect data relating to that area, which will in turn be fed back into the software and increase the probability that this same area is flagged as “at risk”. Predictive policing thus produces a self-fulfilling prophecy, concentrating significant resources on areas already plagued by discrimination and over-policing.
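A toy simulation, which assumes nothing about any vendor’s real implementation, makes this mechanism visible: two areas with identical underlying offense rates, one of which enters the historical data already over-policed. Because patrols follow the scores and offenses are only recorded where patrols actually go, the initial disparity reproduces itself week after week.

```python
# Toy simulation of the feedback loop; numbers and update rule are purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
true_rate = np.array([5.0, 5.0])   # identical "ground truth" offense rates per week
recorded = np.array([40.0, 10.0])  # historical recorded offenses: area 0 is over-policed

for week in range(100):
    # The "predictive" model scores each area from its recorded history...
    risk = recorded / recorded.sum()
    # ...patrols are allocated in proportion to the scores...
    patrols = risk
    # ...and offenses are only *recorded* where patrols are present to observe them.
    recorded += rng.poisson(true_rate * patrols)

print("patrol share     :", np.round(risk, 2))      # area 0 keeps roughly 4x the patrols
print("recorded offenses:", recorded.astype(int))   # the data "confirms" the initial bias
```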

Possible abuses of power

While we have not found any information on the specific instructions given to police officers when patrolling in areas deemed high-risk by predictive systems, one source told us that, thanks to PAVED, the gendarmerie was able to obtain authorization from the public prosecutor for officers on patrol to position themselves in transit areas and stop passing vehicles. This involved checking license plates and driving licenses, and in some cases carrying out vehicle searches.

If this information proves accurate, it would mean that preventive checks, carried out under authorization from the public prosecutor’s office, were decided on the sole basis of a technology founded on dubious doctrines and whose effectiveness has never been assessed. Such a situation would, in itself, amount to a clear disproportion in the liberty-restricting measures imposed on the people subjected to those stops and searches.

Technologies of dubious effectiveness

Given their discriminatory nature, even if these predictive policing systems proved effective from the standpoint of police rationality, they would still pose significant problems in terms of social justice and respect for human rights. Yet, despite the absence of any official evaluation, the available data points to the lack of any added value of predictive models in achieving the objectives the police set for themselves.

In fact, these tools seem far from having convinced their users. PredVol did no better than simple human deduction. As for PAVED, although it may have prevented a few car thefts, it proved disappointing in terms of predictive capability and did not translate into an increased number of arrests in flagrante delicto, which remains the yardstick of police efficiency under the reign of the “policy of numbers”. Despite initial plans, PAVED was never rolled out across the Gendarmerie Nationale: following an experimental phase from 2017 to 2019, it was decided to shelve the software. And while M-Pulse has found a new lease of life under the “citizen rebranding” pushed by Marseille’s new center-left municipal majority, its police uses seem relatively marginal.

For what reasons? The opacity surrounding these experiments makes it impossible to say with any certainty, but the most likely hypothesis lies both in the absence of any real added value in relation to existing knowledge and beliefs within police forces, and in the organizational and technical complexity associated with the use and maintenance of these systems.

Serious shortcomings in the handling of personal data

For those opposing these systems, the information presented in our report might seem reassuring. But in reality, even if the fad surrounding “predictive policing” seems to have passed, R&D on decision-support systems for policing continues unabated. In France, substantial sums of money are being spent to meet the stated ambition of “taking the Ministry of the Interior to the technological frontier”, as envisioned in the 2020 Internal Security White Paper1. In a context where techno-securitarian approaches are given primacy, PAVED could thus be reactivated or replaced by other systems in the near future. As for Edicia, in recent months the company has been considering incorporating new sources of data from social networks into its predictive module, as envisaged by the designers of M-Pulse at the start of that project. Predictive policing thus remains very much on the agenda.

Questioned via a FOIA request in March 2022, and again in November 2023, the CNIL, the French data protection authority, told us that it had never received or produced any document relating to predictive policing software in the exercise of its prerogatives. This suggests that the authority has never taken an interest in these automated decision-making systems as part of its oversight powers. In itself, this raises serious questions, considering that some of these systems are used by thousands of municipal police officers across France.

Finally, insofar as the administrative police powers exercised in areas deemed “at risk” by predictive systems can legally be considered “individual administrative decisions”, the requirements set out by the French Constitutional Council in its case law on “public” algorithms should be respected2. In particular, that case law prohibits the use of “sensitive data” and requires that administrative appeal routes be available to the people concerned. Added to this are the transparency obligations imposed by law, notably the 2016 law known as the “Digital Republic” Act3.

These legislative requirements, as well as European case law, do not appear to be met by predictive policing systems. There is no meaningful, proactive effort to inform citizens and other stakeholders about how exactly these systems work, apart from the occasional pieces of information opportunistically shared by the police or other government agencies. Worse still, the right of access to official documents that we exercised through our FOIA requests to learn more about them yielded only very partial information. More often than not, these requests were simply met with silence, particularly from the Ministry of the Interior.

It’s urgent to ban predictive policing

Predictive policing systems are hardly in the news anymore. And yet, despite a blatant lack of evaluation and legislative oversight, and despite poor operational results, the promoters of these technologies continue to entertain the belief that “artificial intelligence” will make the police more “efficient”. From our point of view, what these systems produce is above all an automation of social injustice and police violence, and an even greater dehumanization of the relationship between the police and the population.

In this context, it is urgent to put a stop to the use of these technologies and then conduct a rigorous evaluation of their implementation, effects and dangers. The state of our knowledge leads us to believe that such transparency will prove their ineptitude and dangers, and provide further evidence of the need to ban them.

Help us

To compensate for the opacity deliberately maintained by the designers of these systems and by the public authorities that use them, we invite you, if you have documents or other material that could shed light on how they work and what effects they have, to share them on our anonymous document-sharing platform. You can also send them to us by post at the following address: 115 rue de Ménilmontant, 75020 Paris.

Finally, please don’t hesitate to point out any factual or analytical errors you may find in our report by writing to us at contact@technopolice.fr. And to support this type of research in the future, please also feel free to make a donation to La Quadrature du Net.

Read the full report


  1. The White Paper proposed to devote 1% of GDP to internal security missions by 2030, representing an expected increase of around 30% in the Ministry’s budget over the decade. Ministère de l’intérieur, « Livre blanc de la sécurité intérieure » (Paris : Gouvernement français, 16 novembre 2020), https://www.interieur.gouv.fr/Actualites/L-actu-du-Ministere/Livre-blanc-de-la-securite-interieure. ↩︎
  2. See the decision on the transposition of the RGPD (Decision n° 2018-765 DC of June 12, 2018) and the decision on Parcoursup (Decision n° 2020-834 QPC of April 3, 2020). ↩︎
  3. On the legal obligations of transparency of public algorithms, see: Loup Cellard, « Les demandes citoyennes de transparence au sujet des algorithmes publics », Note de recherche (Paris : Mission Etalab, 1 juillet 2019), http://www.loupcellard.com/wp-content/uploads/2019/07/cellard_note_algo_public.pdf. ↩︎