Predictive Policing: Repeating history through algorithms

Certain groups, such as racialised people, face disproportionate levels of police intervention and police violence in Europe and across the world. It is not surprising, then, that police forces’ uptake of new technologies follows the same patterns.

Many of the most high-profile discriminatory examples of facial recognition have been in the US. However, the EU is not without its share of examples of how predictive policing + biometric analysis = a perfect storm of unlawful police discrimination.

Worse still, automated predictive policing often hides behind the false claim that technology is neutral, which gives police forces an excuse to evade accountability: “The tech told me to do it!”.


Analogue predictive policing

A common justification given by governments to explain the over-policing of racialised people is that racialised communities are inherently more criminal. They claim that this is supported by statistics showing that racialised people are more frequently arrested and imprisoned. However, the only thing that these historical statistics highlight is, in fact, that racialised communities are vastly over-exposed to (often violent) police intervention, and are systematically treated more harshly and punitively by criminal justice systems. These statistics reflect the actions of police and of justice systems, not the behaviours or qualities of racialised people.

Systemic discrimination is rooted in analogue predictive policing practices: police (and wider society) making judgements and predictions about an individual based on, for instance, the colour of their skin or the community of which they are a part.

The use of new technologies by police forces makes these practices even more harmful to people’s lives, while hiding under the false pretext of “technological objectivity”.

Automated predictive policing: WHAT is it and HOW is it used?

Automated predictive policing is the practice of applying algorithms to historical data in order to predict future crime. This could be by using certain group factors (such as someone’s ethnicity, skin colour, facial features, postcode, educational background or who they are friends with) to automatically predict whether they are going to commit a crime.

There is a principle sometimes referred to as “Garbage in, garbage out”: if you feed an algorithm data that reflects bias and unfairness, the results you get will always be biased and unfair.

“Garbage in, garbage out” guides some of the ways law enforcement uses automated predictive policing when:

  • Deciding where to deploy extra police presence. This traps communities that have been over-policed in an inescapable loop of more and more police interventions (a feedback loop sketched in the code below);
  • Predicting whether people are likely to re-offend, an assessment that can influence whether someone gets parole or not. This means that a person’s liberty is decided based on discriminatory data about other people that the system thinks are similar to that person.
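
That feedback loop is easy to see in code. Below is a minimal sketch in Python of a toy “hot spot” model that simply ranks districts by past arrest counts. The districts, numbers and scoring rule are all invented for illustration and do not describe any real police system:

    # A toy "hot spot" model: rank districts by historical arrest counts.
    # Everything here is invented for illustration, not a real system.

    # Assume two districts with the SAME underlying crime rate, where
    # district_A has historically been patrolled twice as heavily, so
    # twice as many incidents there ended up recorded as arrests.
    arrests = {"district_A": 200, "district_B": 100}

    def predict_hotspot(history):
        """'Predict' next week's hot spot: the district with most past arrests."""
        return max(history, key=history.get)

    # Feedback loop: patrols go where the model points, patrols produce
    # more recorded arrests, and those arrests raise the model's score.
    for week in range(5):
        target = predict_hotspot(arrests)
        arrests[target] += 20  # extra patrols record extra arrests
        print(f"week {week}: patrol {target}, history: {arrests}")

District A is picked every single week. The model never discovers that both districts are identical, because its only input is a record of where police looked in the past, not of where crime actually happens: garbage in, garbage out.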

WHY we must FIGHT BACK

Having certainties in life can be comforting for all of us. However, when the police and the criminal justice system try to predict crime, it is not possible to know with enough certainty how someone is going to act in the future. Trying to do so will only reinforce and intensify historical patterns of injustice and widen societal inequalities. Introducing algorithmic predictions in policing will only make the poor poorer, push the excluded further out of society, and expose those already suffering from discrimination to even more of it.

As unique humans, with free will, self-determination and the power to change our life path, we have the right to be treated fairly, not beaten down by an (automated) justice system.


Examples

In the Netherlands, “smart” devices have sprayed the scent of oranges at people that the biometric algorithm thinks are displaying aggressive behaviour. Given the biases and discriminatory assumptions baked into such tech, it is likely that such technologies will disproportionately be used against racialised people. Being followed by the smell of oranges might not seem so bad – but this tech is also being used in the Netherlands to trigger the deployment of an emergency police vehicle responding to what the algorithm predicts is a violent incident: https://edri.org/wp-content/uploads/2021/07/EDRI_RISE_REPORT.pdf [p.92]

In Sweden, the police were fined for using unlawful facial recognition systems, and were particularly criticised for failing to undertake any assessment of how it might infringe on people’s rights to data protection and related rights, such as equality: https://edpb.europa.eu/news/national-news/2021/swedish-dpa-police-unlawfully-used-facial-recognition-app_en

In the Italian city of Como, authorities deployed biometric surveillance systems to identify ‘loitering’ in a park in which stranded migrants were forced to sleep after being stopped at the Swiss-Italian border: https://privacyinternational.org/case-study/4166/how-facial-recognition-spreading-italy-case-como

A Spanish biometric mass surveillance company called Herta Security – which has received funding from the EU – developed facial recognition technology which they say can profile people’s ethnicity. When we challenged them about this being unlawful, they said it isn’t a problem because they would only sell that part of their tech to non-EU countries: https://www.wired.com/story/europe-ban-biometric-surveillance/ and https://www.youtube.com/watch?v=u30vRl70tgM&feature=youtu.be


News and reports:

Learn more about the different ways that biometric mass surveillance affects us, and SIGN the ECI!

From biometric IDs to biometric mass surveillance

We might not even know it, but most of us have already encountered biometric databases in practice. There are already many databases in the EU containing the faces, fingerprints and other biometric data of millions and millions of people.

One of the ways that governments get this data is by making it necessary for people to submit their sensitive biometric data in order to get an identity card. In some countries, these identity cards are even essential for accessing public services. This means that if you disagree with giving away data that can identify you forever, you could be excluded from hospitals and schools, or even from basic services such as electricity or water!


Is a Biometric ID also biometric mass surveillance?

Biometric mass surveillance is a set of practices that use technological tools to analyse data about people’s faces, bodies and behaviours in a generalised or arbitrarily-targeted way, in publicly-accessible spaces. It can be done in different ways, but it always requires some sort of data to perform a comparison.

For example, in the process of biometric identification, an anonymous person will be scanned and matched against an existing database of images of people. In this way, the system will verify whether or not the anonymous person matches anyone in the database. This means that a biometric database is needed in order for the system to be able to identify people. For this reason, biometric databases can form an essential component of biometric mass surveillance infrastructure.
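
To make the role of the database concrete, here is a minimal sketch in Python of the matching step, assuming faces have already been reduced to numeric “embedding” vectors by a face-analysis model. The names, vectors and threshold are invented for illustration:

    import math

    def cosine_similarity(a, b):
        # Standard similarity measure between two embedding vectors.
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    # The database: identity -> stored face embedding. Without a database
    # like this, the system has nothing to compare a captured face against.
    database = {
        "person_001": [0.11, 0.83, 0.42],
        "person_002": [0.95, 0.02, 0.31],
    }

    def identify(probe, db, threshold=0.9):
        # Return the best-matching identity above the threshold, or None.
        best_id, best_score = None, threshold
        for identity, stored in db.items():
            score = cosine_similarity(probe, stored)
            if score > best_score:
                best_id, best_score = identity, score
        return best_id

    # A face captured in a public space becomes a probe vector and is
    # compared against every single entry in the database.
    print(identify([0.10, 0.80, 0.45], database))  # -> person_001

Note the loop: every identification scans the whole database, so the bigger the database, the more people are drawn into every single scan.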

Not all biometric databases automatically equal biometric mass surveillance. However, all biometric databases create the perfect conditions and infrastructures for governments and companies to be able to identify everyone, all of the time.

This is problematic because of the potential for mass surveillance, which has already been realised in some of the examples listed below. Moreover, centralising the collection of such sensitive data opens the door to abuses, leaks and hacks, and threats to people’s safety.

Have biometric databases already been turned into an infrastructure for biometric mass surveillance?

These databases are already being used for mass surveillance purposes. These are just some examples:

  • Some governments use databases that they have created themselves – whether for a national digital ID or for another purpose – which allows them to collect together vast amounts of sensitive data about their citizens.
  • Other governments also buy access to databases from private companies (like the notorious ClearviewAI), which can give these companies access to sensitive data and influence over how states analyse and use that data.
  • Some private companies, such as supermarkets, also create their own databases, for example to keep track of people that they suspect of shoplifting. This is especially troubling because people are added to these databases based solely on the suspicions of a security guard or shopkeeper, without any due process. People could find themselves excluded from shops without any judicial review, potentially as a result of having been unfairly profiled.

Examples

In Poland, children as young as 12 have been required to submit their biometric data to the government, which will form a permanent record of them and creates the potential for mass surveillance: https://edri.org/wp-content/uploads/2021/07/EDRI_RISE_REPORT.pdf pp.118-122

In the Netherlands, 180,000 people are falsely included in the government’s criminal database which is used to perform facial recognition analysis, and 7 million people are included in another biometric database simply for being foreign: https://edri.org/wp-content/uploads/2021/07/EDRI_RISE_REPORT.pdf

In Italy, the police’s SARI database has been used extensively to undertake biometric surveillance and attempts have been made to get permission to use the system in its ‘real-time’ (i.e. mass surveillance) mode. A staggering 8 out of 10 people in the system’s reference database are foreigners: https://edri.org/our-work/face-recognition-italian-borders-ban-biometric-mass-surveillance/

In Greece, the police have set up a mass central biometric database containing fingerprints and facial images of all Greek passport holders, likely without a legal basis (as biometric data from people’s passports is legally supposed to be stored on the passport itself, not in a database): https://edri.org/our-work/reclaim-your-face-update/

In Sweden, the police have been fined for illegally using ClearviewAI’s database: https://edpb.europa.eu/news/national-news/2021/swedish-dpa-police-unlawfully-used-facial-recognition-app_en

In France, the police have been using the enormous ‘TAJ’ database of 8 million images of people involved in police investigations (including people who have been acquitted): https://edri.org/our-work/our-legal-action-against-the-use-of-facial-recognition-by-the-french-police/



Biometric mass surveillance as general monitoring

Digital technologies make the general monitoring of people easier, but these practices have been around for a long time. Infamous surveillance societies throughout history and today have used general monitoring as a way to keep tabs on populations: think of the East German Stasi, or the way the Chinese government surveils its population.

This may sound extreme – but real examples appearing across Europe have a lot more in common with these regimes than you might think.


So, WHAT IS GENERAL MONITORING?

When we say general monitoring, we’re talking about the use of surveillance devices to spy on every person in a generalised manner.

This could be, for example, by using a camera in a public space (like a park, a street, or a train station). It can also happen in other ways, for example when governments or companies listen in to everyone’s phone calls, or snoop on everyone’s emails, chats, and social media messages.

That’s why another term for general monitoring is mass surveillance.

But why is that harmful?

General monitoring is harmful because it prevents us from enjoying privacy and anonymity. These democratic principles are incredibly important as they enable us to live our lives with dignity and autonomy.

Depriving people of anonymity and privacy can have real and serious impacts: imagine governments and companies knowing all of your health problems because they’ve tracked which medical establishments you go to over time.

Imagine being surveilled because you were seen going to an LGBTQ+ bar – especially if you live in a country where LGBTQ+ people do not enjoy full rights. Imagine your future life prospects (e.g. work, university) being limited because you were caught loitering or littering as a teenager.

Another reason why general monitoring is dangerous is that it alters the justice system and the principles governing it – such as the presumption of innocence. If governments want to watch us, they are supposed to have a proper and well-justified reason for doing so, because we all have a right to be presumed innocent until proven guilty. With general monitoring, this is flipped on its head: every single person in a particular group or a whole population is treated as a potential suspect.

Below, you can find evidence of biometric systems in the EU watching people for all of these reasons. This only gives us a hint of how this data might be used in the future.


Examples


In the German city of Cologne, hundreds of thousands of people have been placed under biometric surveillance outside LGBTQ+ venues, places of worship and doctors’ surgeries: “The rise and rise of biometric mass surveillance in the EU” [p.20].

Across Europe, people exercising their right to peaceful assembly have been targeted through general biometric monitoring in at least Germany, Austria, Slovenia, the UK and Serbia, and the French government has attempted to do the same. It’s not just streets – we’ve seen similar systems in train stations (Germany), airports (Belgium), football stadiums (Denmark and the Netherlands) and much more.

In the Netherlands, three cities have been turned into ‘Living Labs’ where the general monitoring of people’s biometric data is combined with general monitoring of their social media interactions and other data, creating profiles which are then used to make decisions about their lives and futures: “The rise and rise of biometric mass surveillance in the EU” [p.88]

In Greece, the European Commission’s Internal Security Fund gave €3 million to private company Intracom Telecom to develop facial recognition for police to use against suspects, witnesses and victims of crime: https://edri.org/our-work/facial-recognition-homo-digitalis-calls-on-greek-dpa-to-speak-up/

In Czechia, the police bought Cogniware facial recognition software that can predict emotions and gender and that, according to Cogniware’s website, has the capacity to link every person with their financial information, phone data, the car they drive, their workplace, work colleagues, who they meet, the places they visit and what they buy: https://edri.org/our-work/czech-big-brother-awards-worst-privacy-culprits/


News & reports



ReclaimYourFace is a movement led by civil society organisations across Europe:

Access Now ARTICLE19 Bits of Freedom CCC Defesa dos Direitos Digitais (D3) Digitalcourage Digitale Gesellschaft CH Digitale Gesellschaft DE Državljan D EDRi Electronic Frontier Finland epicenter.works Hermes Center for Transparency and Digital Human Rights Homo Digitalis IT-Political Association of Denmark IuRe La Quadrature du Net Liberties Metamorphosis Foundation Panoptykon Foundation Privacy International SHARE Foundation
In collaboration with our campaign partners:

AlgorithmWatch AlgorithmWatch/CH All Out Amnesty International Anna Elbe Aquilenet Associazione Luca Coscioni Ban Facial Recognition Europe Big Brother Watch Certi Diritti Chaos Computer Club Lëtzebuerg (C3L) CILD D64 Danes je nov dan Datapanik Digitale Freiheit DPO Innovation Electronic Frontier Norway European Center for Not-for-profit Law (ECNL) European Digital Society Eumans Football Supporters Europe Fundación Secretariado Gitano (FSG) Forum InformatikerInnen für Frieden und gesellschaftliche Verantwortung Germanwatch German ACM chapter Gesellschaft für Informatik (German Informatics Society) GONG Hellenic Association of Data Protection and Privacy Hellenic League for Human Rights info.nodes Irish Council for Civil Liberties JEF, Young European Federalists Kameras Stoppen Ligue des droits de L'Homme (FR) Ligue des Droits Humains (BE) LOAD e.V. Ministry of Privacy Privacy First Privacy Lx Privacy Network Progetto Winston Smith Reporters United Saplinq Science for Democracy Selbstbestimmt.Digital STRALI Stop Wapenhandel The Good Lobby Italia UNI-Europa Unsurv Vrijbit Wikimedia FR Xnet


Reclaim Your Face is also supported by:

Jusos Piratenpartei DE Pirátská Strana

MEP Patrick Breyer, Germany, Greens/EFA
MEP Marcel Kolaja, Czechia, Greens/EFA
MEP Anne-Sophie Pelletier, France, The Left
MEP Kateřina Konečná, Czechia, The Left



Should your organisation be here, too?
Here's how you can get involved.
If you're an individual rather than an organisation, or your organisation type isn't covered in the partnering document, please get in touch with us directly.