News




Surveilling Europe’s edges: when research legitimises border violence

In May 2024, EDRi member Access Now’s Caterina Rodelli travelled across Greece to meet with local civil society organisations supporting migrant people and monitoring human rights violations, and to see first-hand how and where surveillance technologies are deployed at Europe’s borders.

Pushback practices have been recorded in Evros since the late 1980s

In the second instalment of a three-part blog series, EDRi member Access Now’s Caterina Rodelli explains how EU-funded research projects on border surveillance are legitimising violent migration policies. Catch up on part one here.

As I drove along Greece’s E85 highway from the coastal city of Alexandroupoli towards the northernmost city of Orestiada, a grey wall appeared as I passed the small town of Soufli, known for its silk industry. The more I drove northwards, the more the wall unravelled itself, its sharpened edges and surveillance towers spreading down the road.

This wall runs along the Evros river, which marks the 200 kilometre natural border between Türkiye and Greece. It was first built in 2011 as a wire fence, after the Greek government, inspired by the US-Mexico border, ordered its construction as a purported solution to keep out migrants seeking safety. Other walls have since been erected in the Evros region; physical manifestations of the “embeddedness and longevity of border violence.” Pushback practices — i.e. the informal and illegal expulsion of people across borders without due process — have been an integral element at many European borders since the early 2000s and have been recorded in Evros since the late 1980s.

Human rights groups, journalists, and researchers have documented several instances of abuse with sometimes fatal consequences

Human rights groups, journalists, and researchers have documented several instances of abuse in this part of Greece: from trapping people in houses where they are subjected to degrading and violent treatment and pushing back migrants with sometimes fatal consequences, including the reported drowning of a four-year-old child, to instances of sexual assault.

Moreover, both migrants and those who defend their rights have been targeted with multiple smear campaigns, including one that blamed migrants for sparking wildfires that devastated more than 300 square miles of land in 2023.

Meanwhile, in the same region, the EU has been a long-time supporter of multiple research projects on border surveillance. As part of the now-defunct Horizon 2020 and ongoing Horizon Europe schemes, the EU has funded programmes testing integrated advanced border surveillance technologies, such as thermal imaging cameras, radio frequency analysis systems, mobile unmanned vehicles (including drones and autonomous ground and sea vehicles), and pylons mounted with tracking sensors.

Researcher Lena Karamanidou has been monitoring the development of these research projects, specifically in the Evros river’s delta, as part of the Border Violence Monitoring Network’s (BVMN) work on border surveillance technologies. They have found evidence that two Horizon 2020 projects, Nestor and Andromeda, have been tested in the area and used by local police authorities for real-life border control activities, outside of the research framework. These projects, which claimed to improve border surveillance solutions, were implemented by a pan-European consortium of public bodies, private companies, and research institutes. Nestor focused on testing the “next generation holistic border surveillance system providing pre-frontier situational awareness beyond maritime and land border areas,” while Andromeda tested solutions for improving information sharing among authorities engaged in border management.

The research projects in the Evros area raise two important questions

The roll-out of these research projects in the Evros area, where violence against migrant people is a near-daily occurrence, raises two important questions:

  1. What exactly does the EU mean when it talks about safety? The use of border surveillance technology is often justified under safety pretexts; however, several examples show that it has instead been used to facilitate pushbacks in the Mediterranean Sea or to conduct enforced disappearances at external borders. Given that the Evros river is a heavily surveilled and violent area, it is clear that this approach to safety does not include migrants’ safety, nor does it guarantee respect for the human rights of all.
  2. What is the role of research in legitimising policies of border violence?
    In the same way that biased research and pseudoscience have historically been used to justify racial discrimination, there is a high risk that the EU’s funding of public research into border surveillance could lend a veneer of scientific objectivity to some of its violent migration practices and policies. Tested across the EU, these programmes usually focus on demonstrating the systems’ accuracy, without questioning the problematic underlying assumption that migrant and racialised people are an inherent threat to Europe.

What European policymakers need to know — and do

Surveillance does not equal safety for all. While surveillance technology is deployed on security grounds, violence against migrant people continues daily. In addition, many of the EU-funded research projects testing and deploying this technology are doing so outside of any legislative framework, which further reduces accountability. EU policymakers must:

  • Stop using EU public research funds for border surveillance testing programmes;
  • Ensure transparency when the technology tested in any research projects is subsequently used in the field; and
  • Commit to ending any inhumane treatment and violations of migrants’ human rights at Europe’s borders.

For any questions regarding this blog, please contact caterina@accessnow.org. If you or your organisation needs digital security support, please contact help@accessnow.org.

This article was first published here by EDRi member Access Now.

This is the second instalment of a three-part blog series. Catch up on part one here.

Contribution by: Caterina Rodelli, EU Policy Analyst, EDRi member, Access Now

Biometric surveillance in the Czech Republic: the Ministry of the Interior is trying to circumvent the Artificial Intelligence Act

EDRi member Iuridicum Remedium draws attention to the way the Czech Republic plans to legalise biometric surveillance at airports. According to the proposal, virtually anyone could become a person under surveillance. Moreover, surveillance could be extended from airports to other public spaces.

In September, the Czech government adopted a proposal by the Ministry of the Interior to legalise automated facial recognition at international airports. The government approved the proposal with changes: the time limit on the retention of unrecognised data on all airport visitors will be reduced from 90 to 30 days.

However, the main problem regarding the judicial authorisation to include a person in the reference database remains. According to the EU Artificial Intelligence (AI) Act, the use of a facial recognition system on a specific person must be authorised by a court or other independent authority. But according to the Czech Ministry of the Interior, the court could also authorise entire “predefined categories of persons”. With such a vague definition, virtually anyone can be included in the database.

This effectively negates the basic control mechanism that the painstakingly negotiated AI Act introduced. The aim is clearly to keep the system at the airport running essentially as it has operated since 2018, even though this is – according to our analysis – contrary to the law.

The current proposal only concerns the airport, although the Ministry did not originally envisage such a restriction at all: it was only after comments from civil society that the law was limited to airport systems. The explanatory memorandum shows, however, that the Ministry is certainly not opposed to extending the systems to other public spaces.

IuRe has therefore launched a special campaign against biometric surveillance. The website The Czech Republic is not China (only in Czech) introduces the public to the issue in the form of a quiz. Change has been achieved with the right social pressure in the past, and hopefully it will be possible again.

Contribution by: Hynek Trojánek, EDRi-member, Iuridicum Remedium

EDRi and Reclaim Your Face campaign recognised as Europe AI Policy leaders

EDRi and the Reclaim Your Face coalition were recognised as the Europe AI Policy Leader in Civil Society for our groundbreaking work as a coalition to advocate for a world free from biometric mass surveillance.

Last week, on 21 May 2024, EDRi was awarded as the Europe AI Policy Leader in Civil Society by the Center for AI and Digital Policy Europe. The honour was specifically bestowed upon EDRi’s Reclaim Your Face campaign, which started in November 2020. Through this campaign, we called for an end to biometric mass surveillance practices in Europe, including through the EU Artificial Intelligence (AI) Act.

EDRi’s Ella Jakubowska and Andreea Belu receiving the Europe AI Policy Leader award at the CPDP 2024 opening ceremony.

This honour is a recognition of the work done by the coalition – over 110 organisations across 25 EU countries, hundreds of volunteers, and many actions taken by hundreds of thousands of supporters all over Europe. The award was presented in Brussels during the opening ceremony of the CPDP Conference, and received on behalf of the coalition by Ella Jakubowska and Andreea Belu of the EDRi Brussels Office.

Another recognition for the Reclaim Your Face campaign: Digital civic engagement honorary award

This honour adds to a previous recognition of the campaign’s achievements since it started. In December 2022, the Reclaim Your Face coalition received the honorary award for Digital civic engagement – a cooperation between LOAD e.V., the Friedrich Naumann Foundation for Freedom and the Thomas Dehler Foundation. The coalition was represented at the ceremony in Munich, Germany by Andreea Belu (EDRi Brussels office), Matthias Marx (Chaos Computer Club), and Konstantin Macher (formerly from Digitalcourage).

Andreea Belu (EDRi Brussels office – centre), Matthias Marx (Chaos Computer Club – left), and Konstantin Macher (formerly from Digitalcourage – right) receiving the award in Munich, Germany in December 2022.

What’s next for our fight against biometric mass surveillance?

We’re grateful for the recognition of our work, and the honour of being able to represent over a million people across the EU who want a ban on intrusive surveillance practices. Despite the disappointment of the final Artificial Intelligence (AI) Act, which is full of holes when it comes to bans on different forms of biometric mass surveillance (BMS), we’re looking to the future. There are some silver linings in the legislation which give us opportunities to oppose BMS in public spaces and to push for better protection of people’s sensitive biometric data. Read more about how we’re planning to continue fighting for a world free from biometric mass surveillance. You can also look at our living legal and practical guide for civil society organisations, academics, communities and activists, which charts a human rights-based approach for how to keep resisting BMS practices now and in the future.

The future of our fight against biometric mass surveillance

The final AI Act is disappointingly full of holes when it comes to bans on different forms of biometric mass surveillance (BMS). Despite this, there are some silver linings in the form of opportunities to oppose BMS in public spaces and to push for better protection of people’s sensitive biometric data.

Throughout spring 2024, European Union (EU) lawmakers have been taking the final procedural steps to pass a largely disappointing new law, the EU Artificial Intelligence (AI) Act.

This law is expected to come into force in the summer, with one of the most hotly-contested parts of the law – the bans on unacceptably harmful uses of AI – slated to apply from the end of 2024 (six months and 20 days after the legal text is officially published).

The first draft of this Act, in 2021, proposed to ban some forms of public facial recognition, showing that lawmakers were already listening to the demands of our Reclaim Your Face campaign. Since then, the AI Act has continued to be a focus point for our fight to stop people being treated as walking barcodes in public spaces.

But after a gruelling three-year process, AI Act negotiations are coming to an underwhelming end, with numerous missed opportunities to protect people’s rights and freedoms or to uphold civic space.

One of the biggest problems we see is that the bans on different forms of biometric mass surveillance, or BMS, are full of holes. BMS is the term we’ve used as an umbrella for different methods of using people’s biometric data to surveil them in an untargeted or arbitrarily-targeted way – which have no place in a democratic society.

At the same time, all is not lost. As we get into the nitty-gritty of the final text, and reflect on the years of hard work, we mourn the existence of the dark clouds – and we celebrate the silver linings and the opportunities they create to better protect people’s sensitive biometric data.

Legitimising biometric mass surveillance

Whilst the AI Act is supposed to ban a lot of unacceptable biometric practices, we’ve argued since the beginning that it could instead become a blueprint for how to conduct BMS.

As we predicted, the final Act takes a potentially dystopian step towards legalising live public facial recognition – which so far has never been explicitly allowed in any EU country. The same goes for pseudo-scientific AI ‘mind-reading’ systems, which the AI Act shockingly allows states to use in policing and border contexts. Using machines to categorise people’s gender and other sensitive characteristics, based on how they look, is also allowed in several contexts.

We have long argued that these practices can never be compatible with our fundamental rights to dignity, privacy, data protection, free expression and non-discrimination. By allowing them in a range of contexts, the AI Act legitimises these horrifying practices.

Reasons for hope

Yet whilst the law falls far short of the full ban on biometric mass surveillance in public spaces that we called for, it nevertheless offers several points to continue our fight in the future. To give one example, we have the powerful opportunity to capitalise on the wide political will in support of our ongoing work against BMS to make sure that the AI Act’s loopholes don’t make it into national laws in EU member states.

Our upcoming ‘Legal and practical guide to fighting BMS after the AI Act’ is therefore intended to inform and equip those who are reflecting and re-fuelling for the next stage in the fight against BMS.

This guide will lay out where we can use the AI Act’s opportunities to fight for better protections for our rights to exist free from BMS in public spaces. This includes charting out more than 10 specific advocacy opportunities including formal and informal spaces to influence, and highlighting the parts of the legal text that create space for our advocacy efforts.

A precedent for banning dangerous AI

We also remind ourselves that whilst the biometrics bans have been dangerously watered down, the Act nevertheless accepts that we must ban AI systems that are not compatible with a democratic society. This idea has been a vital concept for those of us working to protect human rights in the age of AI, and we faced a lot of opposition on this point from industry and conservative lawmakers.

This legal and normative acceptance of the need for AI bans has the potential to set an important global precedent for putting the rights and freedoms of people and communities ahead of the private interests of the lucrative security and surveillance tech industry. The industry wants all technologies and practices to be on the table – but the AI Act shows that this is not the EU’s way.

By Ella Jakubowska, Head of Policy, EDRi

The colonial biometric legacy at heart of new EU asylum system

On Wednesday (10 April), the EU is set to vote on a new set of asylum and migration reforms. Among the many controversial changes proposed in the new migration pact, one went almost unnoticed — a seemingly innocent reform of the EU’s asylum database, EURODAC. Although framed as purely technical adjustments, the reality is far more malicious. The changes to EURODAC will massively exacerbate violence against people on the move.

Reform of this 20-year-old database will make it the technological sword of the EU’s hostile asylum and border policies. It will harness the most nefarious surveillance technologies that exist to date — namely the capture, processing and analysis of biometric data — and enable EU states to have full control over migrants’ bodies and movements.

With the collection of biometrics, the body has already become a “passport” for many. Biometrics is the process of making data out of a person’s biological or physiological characteristics. Fingerprints, facial images and iris scans are among the forms of biometrics most widely used by states to uniquely identify a person.

Historically, the identification of every single individual has been key to the organisation of state control and domination over the population. In particular, it allows state authorities to track, monitor and restrict people’s movements.

It’s no surprise that biometrics are becoming the centrepiece of states’ expanding technological surveillance systems, and it’s even less surprising that they’re part of migration control policies. The very origin of biometric surveillance stems from colonial practices of dominating and discriminating against certain groups of people.

All the way back to the slave trade

The transatlantic slave trade developed technologies to mark, identify and track African people as captives and property on a global scale. Forensic identification methods, which included detailed descriptions of facial and bodily features as well as inked fingerprints and photographs of criminal suspects, were mostly applied in the French Empire’s colonies to guarantee order and the continuity of the colonial regime.

Likewise, British colonists ran the first large-scale biometric identity programme involving fingerprinting for controlling people in India. Thus, biometric registration as a replacement for documents and identity proof first became a reality for Black, brown and Asian bodies, especially those who were on the move.

The EU’s policies are just a continuation of this draconian history. Its first centralised biometric database, the European Asylum Dactyloscopy Database (EURODAC), was built to control secondary movements of asylum-seekers within the EU and to register people who irregularly cross external borders.

With the ongoing reform of EURODAC, the mass and routine identification of asylum-seekers, refugees and migrants through biometric data processing will become the building block of the EU’s inhumane asylum system.

The proposed reform is presented as a “mere technicality”, yet its transformation is in fact highly political — it will codify in technology the violent treatment given to migrants in the EU. This means systematic criminalisation, detention in prison-like conditions and swift expulsion.

One of the proposed reforms expanding the scope of EURODAC is the capture of people’s facial images in addition to their fingerprints.

Policymakers have justified the collection of additional biometric data by pointing to reports that some asylum seekers deliberately burn or damage their fingers to alter their fingerprints and avoid identification.

For people on the move, identification implies an imminent risk of being detained, sent back to another EU state they had left — usually because of dreadful reception conditions and few opportunities for integration — or deported to so-called “safe third countries” where they risk persecution and torture. Instead of seeing people forced to harm themselves to avoid identification as a sign that migration policies need to be more humane, the EU has decided to further surveil and terrorise migrants.

EURODAC is also being turned into a mass surveillance tool by targeting even more groups of people than before — including children as young as six.

Despite some weak attempts to require that data collection is done in a ‘child-friendly’ manner, this will not change the outcome — children will be subjected to a seriously invasive and unjustified procedure that de facto stigmatises them.

Consider that in the EU, children younger than 16 are not even able to freely consent to the processing of their personal data under the General Data Protection Regulation (GDPR). Meanwhile, migrant children will have their faces scanned and fingerprints taken in border camps and detention centres.

Police authorities will also be able to access EURODAC data subject to almost no conditions — treating all asylum-seekers and refugees with a presumption of illegality.

EU’s racist double standards exemplified

The use of biometric surveillance in EURODAC has just one explicit purpose — to increase power and control over migrants who have been made socially vulnerable by unfair migration policies and practices. It is intrusive, disproportionate, and contradicts Europe’s own gold standard of data protection.

The EU is currently building a regime of exception within its own legal framework for privacy and data protection, in which people on the move get differentiated treatment.

The EURODAC reform also demonstrates a larger trend in Europe of increasing criminalisation and the logic of ‘policing’. The EU blends migration management and the fight against crime by equating people seeking safety with security threats. This criminalisation lens leads to discriminatory assumptions and associations, resulting in racialised people and migrants being over-surveilled and targeted.

With the massive expansion of centralised databases, EURODAC being a prime example, the EU can no longer hide its racist double standards.

This article was first published here by EUobserver.

Contribution by: Laurence Meyer, Racial and social justice lead, EDRi member, Digital Freedom Fund (DFF) & Chloé Berthélémy, Senior Policy Advisor, EDRi

EU’s AI Act fails to set gold standard for human rights

A round-up of how the EU Artificial Intelligence (AI) Act fares against the collective demands of a broad civil society coalition that advocated for prioritising the protection of fundamental human rights in the law.

For the last three years, EDRi has worked in coalition with a broad range of digital, human rights and social justice groups to demand that artificial intelligence (AI) works for people, prioritising the protection of fundamental human rights. We have put forward our collective vision for an approach where “human-centric” is not just a buzzword, where people on the move are treated with dignity, and where lawmakers are bold enough to draw red lines against unacceptable uses of AI systems.

Following a gruelling negotiation process, EU institutions are expected to conclusively adopt the final AI Act in April 2024. But while they celebrate, we take a much more critical stance. We want to highlight the many missed opportunities to make sure that our rights to privacy, equality, non-discrimination, the presumption of innocence and many other rights and freedoms are protected when it comes to AI. Here’s our round-up of how the final law fares against our collective demands.

Please note that this analysis is based on the latest available version of the AI Act text, dated 6 March 2024. There may still be small changes made before the law’s final adoption.

We called on EU lawmakers to empower affected people by upholding a framework of accountability, transparency, accessibility and redress. How did they do?

Some accessibility barriers have been broken down, but more needs to be done:

  • Article 16 (ja) of the AI Act fulfils our call for accessibility by stating that high-risk AI systems must comply with accessibility requirements. However, we still believe that this should be extended to apply to low- and medium-risk AI systems as well, in order to ensure that the needs of people with disabilities are central in the development of all AI systems which could impact them.

More transparency about certain AI deployments but big loopholes for the private sector and security agencies:

  • The AI Act establishes a publicly-accessible EU database to provide transparency about AI systems that pose higher risks to people’s rights or safety. While originally only providers of high-risk AI systems were subject to transparency requirements, we successfully persuaded decision-makers that deployers of AI systems – those who actually use the systems – should also be subject to transparency obligations.
  • Providers and deployers who place on the market or use AI systems in high-risk areas – such as employment and education, as designated by Annex III – will be subject to transparency obligations. Providers will be required to register their high-risk system in the database and to enter information about it, such as a description of its intended purpose and a concise description of the information used by the system and its operating logic. Deployers of high-risk AI systems who are public authorities – or those acting on their behalf – will be obliged to register the use of the system and to enter information in the database, such as a summary of the findings of a fundamental rights impact assessment (FRIA) and a summary of the data protection impact assessment. However, deployers of high-risk AI systems in the private sector will not be required to register their use of these systems – another critical shortcoming.
  • The major shortcoming of the EU database is that negotiators agreed on a carve-out for law enforcement, migration, asylum and border control authorities. Providers and deployers of high-risk systems in these areas will be required to register only a limited amount of information, and only in a non-publicly accessible section of the database. Certain important pieces of information, such as the training data used, will not be disclosed at all. This will prevent affected people, civil society, journalists, watchdog organisations and academics from exercising public scrutiny over these high-stakes areas, which are prone to fundamental rights violations, and from holding the responsible authorities accountable.

Fundamental rights impact assessments are included, but concerns remain about how meaningful they will be:

  • We successfully convinced EU institutions of the need for fundamental rights impact assessments (FRIAs). However, based on the final AI Act text, we have doubts whether it will actually prevent human rights violations and serve as a meaningful tool of accountability. We see three primary shortcomings:
  1. Lack of a meaningful assessment and of an obligation to prevent negative impacts: while the new rules require deployers of high-risk AI systems to list risks of harm to people, there is no explicit obligation to assess whether these risks are acceptable in light of fundamental rights law, nor to prevent them wherever possible. Regrettably, deployers only have to specify which measures will be taken once risks materialise – likely once the harm has already been done.
  2. No mandatory stakeholder engagement: the requirement to engage external stakeholders, including civil society and people affected by AI, in the assessment process was also removed from the article at the last stages of negotiations. This means that civil society organisations will not have a direct, legally-binding way to contribute to impact assessments.
  3. Transparency exceptions for law enforcement and migration authorities: while in principle, deployers of high-risk AI systems will have to publish the summary of the results of FRIAs, this will not be the case for law enforcement and migration authorities. The public will not even have access to the mere information that an authority uses a high-risk AI system in the first place. Instead, all information related to the use of AI in law enforcement and migration will only be included in a non-public database, severely limiting constructive public oversight and scrutiny. This is a very concerning development as, arguably, the risks to human rights, civic space and the rule of law are most severe in these two areas. Moreover, while deployers are obliged to notify the relevant market surveillance authority of the outcome of their FRIA, there is an exemption from this notification obligation for ‘exceptional reasons of public security’ – an excuse often misused to justify disproportionate policing and border management activities.

When it comes to complaints and redress, there are some remedies, but no clear recognition of “affected person”:

  • Civil society has advocated for robust rights and redress mechanisms for individuals and groups affected by high-risk AI systems. We demanded the creation of a new section titled ‘Rights of Affected Persons’, which would delineate specific rights and remedies for individuals impacted by AI systems. However, this section was not created; instead, there is a “remedies” chapter that includes only some of our demands.
  • This chapter of remedies includes the right to lodge complaints with a market surveillance authority, but lacks teeth, as it remains unclear how effectively these authorities will be able to enforce compliance and hold violators accountable. Similarly, the right to an explanation of individual decision-making processes, particularly for AI systems listed as high-risk, raises questions about the practicality and accessibility of obtaining meaningful explanations from deployers. Furthermore, the effectiveness of these mechanisms in practice remains uncertain, given the absence of provisions such as the right to representation of natural persons, or the ability for public interest organisations to lodge complaints with national supervisory authorities.

The Act allows a double standard when it comes to the human rights of people outside the EU:

  • The AI Act falls short of civil society’s demand to ensure that EU-based AI providers whose systems impact people outside of the EU are subject to the same requirements as those inside the EU. The Act does not stop EU-based companies from exporting AI systems which are banned in the EU, therefore creating a huge risk of violating rights of people in non-EU countries by EU-made technologies that are essentially incompatible with human rights. Additionally, the Act does not require exported high-risk systems to follow the technical, transparency or other safeguards otherwise required when AI systems are intended for use within the EU, again risking the violation of rights of people outside of the EU by EU-made technologies.

Secondly, we urged EU lawmakers to limit harmful and discriminatory surveillance by national security, law enforcement and migration authorities. How did they do?

The blanket exemption for national security risks undermining other rules:

  • The AI Act and its safeguards will not apply to AI systems if they are developed or used solely for the purpose of national security, regardless of whether this is done by a public authority or a private company. This exemption introduces a significant loophole that will automatically exempt certain AI systems from scrutiny and limit the applicability of human rights safeguards envisioned in the AI Act.
  • In practical terms, it would mean that governments could invoke national security to introduce biometric mass surveillance systems, without having to apply any safeguards envisioned in the AI Act, without conducting a fundamental rights impact assessment and without ensuring that the AI system meets high technical standards and does not discriminate against certain groups.
  • Such a broad exemption is not justified under EU treaties and goes against established jurisprudence of the European Court of Justice. While national security can be a justified ground for exceptions from the AI Act, this has to be assessed case-by-case, in line with the EU Charter of Fundamental Rights. The adopted text, however, makes national security a largely digital rights-free zone. We are concerned about the lack of clear national-level procedures to verify if the national security threat invoked by the government is indeed legitimate and serious enough to justify the use of the system and if the system is developed and used with respect for fundamental rights. The EU has also set a worrying precedent regionally and globally; broad national security exemptions have now been introduced in the newly-adopted Council of Europe Convention on AI.

Predictive policing, live public facial recognition, biometric categorisation and emotion recognition are only partially banned, legitimising these dangerous practices:

  • We called for comprehensive bans against any use of AI that isn’t compatible with rights and freedoms – such as proclaimed AI ‘mind reading’, biometric surveillance systems that treat us as walking barcodes, or algorithms used to decide whether we are innocent or guilty. All of these examples are now partially banned in the AI Act, which is an important signal that the EU is prepared to draw red lines against unacceptably harmful uses of AI.
  • At the same time, all of these bans contain significant and disappointing loopholes, which means that they will not achieve their full potential. In some cases, these loopholes risk having the opposite effect from what a ban should: they give the signal that some forms of biometric mass surveillance and AI-fuelled discrimination are legitimate in the EU, which risks setting a dangerous global precedent.
  • For example, the fact that emotion recognition and biometric categorisation systems are prohibited in the workplace and in education settings, but are still allowed when used by law enforcement and migration authorities, signals the EU’s willingness to test the most abusive and intrusive surveillance systems against the most marginalised in society.
  • Moreover, when it comes to live public facial recognition, the Act paves the way to legalise some specific uses of these systems for the first time ever in the EU – despite our analysis showing that all public-space uses of these systems constitute an unacceptable violation of everyone’s rights and freedoms.

The serious harms of retrospective facial recognition are largely ignored:

  • Retrospective facial recognition is not banned at all by the AI Act. As we have explained, the use of retrospective (post) facial recognition and other biometric surveillance systems (called ‘remote biometric identification’, or ‘RBI’, in the text) is just as invasive and rights-violating as live (real-time) systems. Yet the AI Act makes a big error in claiming that the extra time for retrospective uses will mitigate possible harms. While several lawmakers have argued that they managed to insert several safeguards, our analysis is that the safeguards are not meaningful enough and could easily be circumvented by police. In one place, the purported safeguard even suggests that the mere suspicion of any crime having taken place would be enough to justify the use of a post RBI system – a lower threshold than we currently benefit from under EU data protection law.

People on the move are not afforded the same rights as everyone else, with only weak – and at times absent – rules on the use of AI at borders and in migration contexts:

  • In its final version, the EU AI Act sets a dangerous precedent for the use of surveillance technology against migrants, people on the move and marginalised groups. The legislation develops a separate legal framework for the use of AI by migration control authorities, in order to enable the testing and the use of dangerous surveillance technologies at the EU borders and disproportionately against racialised people.
  • None of the bans meaningfully apply to the migration context, and the transparency obligations present ad-hoc exemptions for migration authorities, allowing them to act with impunity and far away from public scrutiny.
  • The list of high-risk systems fails to capture the many AI systems used in the migration context, as it excludes dangerous systems such as non-remote biometric identification systems, fingerprint scanners, or forecasting tools used to predict, interdict, and curtail migration.
  • Finally, AI systems used as part of EU large-scale migration databases (e.g. Eurodac, the Schengen Information System, and ETIAS) will not have to be compliant with the Regulation until 2030, which gives plenty of time to normalise the use of surveillance technology.

Third, we urged EU lawmakers to push back on Big Tech lobbying and to remove loopholes that undermine the regulation. How did they do?

The risk classification framework has become a self-regulatory exercise:

  • Initially, all use cases included in the list of high-risk applications would have had to follow specific obligations. However, as a result of heavy industry lobbying, providers of high-risk systems will now be able to decide whether or not their system is high-risk, as an additional “filter” was added to that classification system.
  • Providers will still have to register sufficient documentation in the public database to explain why they don’t consider their system to be high-risk. However, this obligation will not apply when they are providing systems to law enforcement and migration authorities. This will pave the way for the free and deregulated procurement of surveillance systems in the policing and border contexts.

The Act takes only a tentative first step to address environmental impacts of AI:

  • We have serious concerns about how the exponential use of AI systems can have severe impacts on the environment, including through resource consumption, extractive mining, and energy-intensive processing. Today, information on the environmental impacts of AI is a closely-guarded corporate secret. This makes it difficult to assess the environmental harms of AI and to develop political solutions to reduce carbon emissions and other negative impacts.
  • The first draft of the AI Act completely neglected these risks, despite civil society and researchers repeatedly calling for the energy consumption of AI systems to be made transparent. To address this problem, the AI Act now requires that providers of general-purpose AI (GPAI) models that are trained with large amounts of data and consume a lot of electricity must document their energy consumption. The Commission now has the task of developing a suitable methodology for measuring this energy consumption in a comparable and verifiable way.
  • The AI Act also requires that standardised reporting and documentation procedures must be created to ensure the efficient use of resources by some AI systems. These procedures should help to reduce the energy and other resource consumption of high-risk AI systems during their life cycle. These standards are also intended to promote the energy-efficient development of general-purpose AI models.
  • These reporting standards are a crucial first step to provide basic transparency about some ecological impacts of AI, first and foremost energy use. But they can only serve as a starting point for more comprehensive policy approaches that address all environmental harms along the AI production process, such as water and mineral consumption. We cannot rely on self-regulation, given how fast the climate crisis is evolving.

What’s next for the AI Act?

The coming year will be decisive for the EU’s AI Act, with different EU institutions, national lawmakers and even company representatives setting standards, publishing interpretive guidelines and driving the Act’s implementation across the EU’s member countries. Some parts of the law – the prohibitions – could become operational as soon as November 2024. It is therefore vital that civil society groups are given a seat at the table, and that this work is not done in opaque settings and behind closed doors.

We urge lawmakers around the world who are also considering bringing in horizontal rules on AI to learn from the EU’s many mistakes outlined above. A meaningful set of protections must ensure that AI rules truly work for individuals, communities, society, rule of law, and the planet.

While this long chapter of lawmaking is now coming to a close, the next chapter of implementation – and trying to get as many wins out of this Regulation as possible – is just beginning. As a group, we are drafting an implementation guide for civil society, coming later this year. We want to express our thanks to the entire AI core group, who have worked tirelessly for over three years to analyse, advocate and mobilise around the EU AI Act. In particular, we thank the work, dedication and vision of Sarah Chander, of the Equinox Initiative for Racial Justice, for her leadership of this group in the last three years.

Authors:

• Ella Jakubowska, EDRi
• Kave Noori, EDF
• Mher Hakobyan, Amnesty International
• Karolina Iwańska, ECNL
• Kilian Vieth-Ditlmann, AlgorithmWatch
• Nikolett Aszodi, AlgorithmWatch
• Judith Membrives Llorens, Lafede.cat / Algorights
• Caterina Rodelli, Access Now
• Daniel Leufer, Access Now
• Nadia Benaissa, Bits of Freedom
• Ilaria Fevola, Article 19

EU AI Act will fail commitment to ban biometric mass surveillance

On 8 December 2023, EU lawmakers celebrated reaching a deal on the long-awaited Artificial Intelligence (AI) Act. Lead Parliamentarians reassured their colleagues that they had preserved strong protections for human rights, including ruling out biometric mass surveillance (BMS).

Yet despite the lawmakers’ bravado, the AI Act will not ban the vast majority of dangerous BMS practices. Instead, it will introduce – for the first time in the EU – conditions on how to use these systems. Members of the European Parliament (MEPs) and EU Member State ministers will vote on whether they accept the final deal in spring 2024.

The EU is making history – for the wrong reasons

The Reclaim Your Face coalition has long argued that BMS practices are error-prone and risky by design, and have no place in a democratic society. Police and public authorities already have so much information about each of us at their fingertips; they do not need to be able to identify and profile us all of the time, objectifying our faces and bodies at the push of a button.

Yet despite a strong negotiating position from the European Parliament, which had called for a ban on most BMS practices, very little survived the AI Act negotiations. Under pressure from law enforcement representatives, the Parliament was cornered into accepting only weak limitations on intrusive BMS practices.

One of the few biometrics safeguards which had apparently survived the negotiations – a restriction on the use of retrospective facial recognition – has since been gutted in subsequent so-called ‘technical’ discussions.

Despite promises from the Spanish representatives in charge of the negotiations that nothing substantive would change after 8 December, this watering-down of protections against retrospective facial recognition is a further letdown in our fight against a BMS society.

What’s in the deal?

Based on what we have seen of the final text, the AI Act is set to be a missed opportunity to protect civil liberties. Our rights to attend a protest, to access reproductive healthcare, or even to sit on a bench could still be jeopardised by pervasive biometric surveillance. Restrictions on the use of live and retrospective facial recognition in the AI Act will be minimal, and will not apply to private companies or administrative authorities.

We are also disappointed that when it comes to so-called ‘emotion recognition’ and biometric categorisation practices, only very limited use cases are banned in the final text, with huge loopholes.

This means that the AI Act will permit many forms of emotion recognition – such as police using AI systems to predict who is or is not telling the truth – despite these systems lacking any credible scientific basis. If adopted in this form, the AI Act will legitimise a practice that throughout history has been linked to eugenics.

Police categorising people in CCTV feeds on the basis of their skin colour is also set to be allowed. It’s hard to see how this could be permissible given that EU law prohibits discrimination – but apparently, when done by a machine, the legislators consider this to be acceptable.

Yet at least one thing had stood out for positive reasons after the final negotiation: the deal would limit post (retrospective) public facial recognition to the pursuit of serious cross-border crimes. Whilst the Reclaim Your Face campaign had called for even stronger rules on this, it would nevertheless have been a significant improvement from the current situation, where EU member states use retrospective facial recognition with abandon.

This would have been a win for the Parliament amid so much ground given up on biometrics. Yet in the time since the final negotiation, pressure from member state governments has seen the Parliament forced to agree to delete the limitation to serious cross-border crimes and to weaken the remaining safeguards [paywalled]. Now, just a vague link to the “threat” of a crime could be enough to justify the use of retrospective facial recognition in public spaces.

Reportedly, the country leading the charge to steamroller our right to be protected from abuses of our biometric data is France. Ahead of the Paris Olympics and Paralympics later this year, France has fought to preserve or expand the powers of the state to eradicate our anonymity in public spaces, and to use opaque and unreliable AI systems to claim to know what we are thinking. The other Member State governments, and the Parliament’s lead negotiators, have failed to stop them.

Under this new law, we will be guilty by algorithm until proven innocent – and the EU will have rubber-stamped biometric mass surveillance. This will give carte blanche to EU countries to roll out more surveillance of our faces and bodies, which in turn will set a chilling global precedent.

France becomes the first European country to legalise biometric surveillance

EDRi member and Reclaim Your Face partner La Quadrature du Net charts out the chilling move by France to undermine human rights progress by ushering in mass algorithmic surveillance, which in a shocking move, has been authorised by national Parliamentarians.


For three years, the EDRi network has rallied against biometric mass surveillance practices through our long-running Reclaim Your Face campaign. Comprising around eighty civil society groups and close to 100,000 European citizens and residents, our movement has rejected the constant tracking of our faces and bodies across time and place by discriminatory algorithms. We have called for lawmakers to protect us from being treated as walking barcodes, and the European Parliament is now poised to vote to ban at least some forms of biometric mass surveillance at EU level.

In contrast, EDRi member and Reclaim Your Face partner La Quadrature du Net (LQDN) charts out the chilling move by France to undermine human rights progress by ushering in mass algorithmic surveillance, which in a shocking move, has been authorised by national Parliamentarians.

Article 7 of the law on the organisation of the Olympic Games has been adopted by the French National Assembly (“Assemblée Nationale”), formalising the introduction of algorithmic video-surveillance into French law until December 2024. Amid the upheaval over the pension reform, and following the usual expedited procedure, the French government succeeded in making one of the most dangerous technologies ever deployed appear acceptable. Using lies and misleading storytelling, the government escaped the technical, political and judicial consequences of mass surveillance. Supported by MPs from the governmental majority and far-right parties, algorithmic video-surveillance has been legalised on the back of these lies, further undermining democratic debate.

  • The lie about biometrics: The government repeated, and wrote into the law, that algorithmic video-surveillance is not related to biometrics. This is entirely false. The technology constantly identifies, analyses and classifies bodies, physical attributes, gestures, body shapes and gait, which are unquestionably biometric data. LQDN has explained this (see its note or video) and tirelessly told the rapporteurs in the Sénat and Assemblée Nationale and other representatives, alongside 38 other international organisations and more than 40 Members of the European Parliament (MEPs) who recently called out the French government. Despite this, the French government stuck to its lies, on technical as well as legal aspects. France is once again violating EU law, cementing its title as Europe’s surveillance champion.
  • The lie about usefulness: The government used the Olympic Games as a pretext to accelerate a long-running agenda of legalising these technologies. In doing so, it followed a widely observed ‘tradition’ of states exploiting international mega-events to pass exceptional laws. The government tries to convince people that the technology is needed to “spot suspicious packages” or “prevent massive crowd movements”. These scenarios suddenly became the top priority for the Ministry of the Interior and for deputies, who reduced the security of the Olympics to these issues, even though they were rarely identified as a priority before. Moreover, these issues can be addressed by human competence rather than by these technologies, as LQDN has demonstrated in this article. The acceptance of algorithmic video-surveillance relies on a deeply rooted myth that technology will magically ensure security. In this way, these opaque technologies are deemed useful without any honest evaluation or demonstration.
  • The technical lie: The main application of algorithmic video-surveillance is to identify behaviours pre-defined as "suspicious" by the police. Arbitrary and dangerous by design, how these algorithms actually work has never been explained by the government, not least because it is not understood by most of those making the decisions. Whether through inexcusable incompetence or deliberate deflection, the level of the parliamentary debate was extremely low, and certainly not what the dramatic issues raised by these biometric technologies demanded. Helped by Guillaume Vuillemet and Sacha Houlié, both from the governing party, and some other MPs, the debate was dominated by a rhetoric of minimisation lifted directly from surveillance companies' marketing narratives, along with lies and technical nonsense. This clearly shows Parliament's inability to discuss technical questions, and society has legitimate reason to fear the future, given how poorly parliamentary representatives grasp the threats of emerging technologies.

As images of police brutality flood people's screens, and as the police, batons in hand, provide the "after-sales service" for the most unpopular reforms, growing police surveillance is part of a broader strategy to stifle any dissent.

Such tactics, which allow the state to reshape the reality of its surveillance powers, must be denounced, particularly in a context where the meaning of words is deliberately twisted to make people believe that "surveillance is protection", "security is freedom", and "democracy means pushing laws through". It is necessary to expose and counter this sham democratic game, and to question the extraordinary powers given to the French police. There is no need to invoke a "Chinese" dystopia to grasp what is at stake: France's own history and present political climate are enough to take the measure of a twenty-year-long security drift, with ever more cameras, surveillance and databases, the depoliticisation of social issues, and a loss of direction among the politicians in charge. The debates on the Olympics law have laid bare the political disorientation of decision-makers, who are unable to question these security measures.

This first legalisation of automated video-surveillance is a victory for the French security industry. Companies that have spent years asking to test their algorithms on the public, improve them and sell the technologies worldwide are now satisfied. Soon, Thales, XXII, Two-I and Neuroo will be allowed to sell biometric software to other states, just as Idemia sold its facial recognition software to China. The startup XXII could not even wait for the law to be voted on before loudly announcing that it had raised 22 million euros to become, in its own words, "the European leader" in algorithmic video-surveillance.

The institutions charged with protecting civil liberties, such as the Commission nationale de l'informatique et des libertés (CNIL), are failing completely. Created in 1978 with real and effective counter-powers to keep the government's surveillance ambitions in check, the CNIL now acts as the after-sales service for government regulations, carefully helping companies implement "good" surveillance in order to protect the industry's economic interests, with no regard for collective rights and freedoms.

This first legal authorisation sets a precedent and opens the gates to every other biometric surveillance technology: algorithmic audio-surveillance, facial recognition, biometric tracking, and more.

LQDN won’t give up the fight and will keep denouncing all of the government’s lies. They will be present as soon as the first experiments begin, documenting the inevitable abuses these technologies lead to. They will find ways to contest them in court and will fight to ensure these experiments remain temporary. And they will keep rejecting these technologies and the techno-policing they embody, fighting at the European level to have them banned.

Left-leaning French lawmakers are planning to challenge the adoption of this bill in the country’s top constitutional court.

This was first published by La Quadrature du Net. Read it in French.

Protect My Face: Brussels residents join the fight against biometric mass surveillance

The newly-launched Protect My Face campaign gives residents of the Brussels region of Belgium the opportunity to oppose mass facial recognition. EDRi applauds this initiative which demands that the Brussels Parliament ban these intrusive and discriminatory practices.


Eight Brussels-based organisations working across human rights and anti-surveillance have come together to launch Protect My Face. This regional campaign focusing on Brussels calls for an explicit ban on facial recognition. Among the NGOs responsible for this action are two of Belgium’s leading human rights groups: EDRi member the Liga voor Mensenrechten, and our Reclaim Your Face partner the Ligue des droits humains. For many years we have worked together to call for a ban on biometric mass surveillance across Europe – a demand which now sees unprecedented support from politicians in the European Parliament.

As one of the official seats of the European Parliament, Brussels is in some ways the beating heart of democracy in Europe. Yet with almost no transparency or oversight, people around the region have been the victims of secretive, disproportionate and rights-violating uses of facial recognition for many years. Federal police subjected people to unlawful facial recognition at Brussels Zaventem airport in 2017 and 2019. And despite a warning from the police oversight board, the federal police also carried out several searches using the controversial Clearview AI facial recognition software in recent years.

Through the long-running Reclaim Your Face campaign, EDRi and our partners have long argued that facial recognition and other forms of biometric mass surveillance, which use our faces and bodies against us, pose an unacceptable risk to our rights and freedoms. They make it possible to permanently track and monitor us in public spaces, and can particularly affect our right to protest because of the 'chilling effect' they create. Biometric mass surveillance also poses a high risk of discrimination, and is even more harmful for racialised people, queer people, homeless people and other minoritised groups.

This new petition is the first step in a regional campaign that gives Brussels residents the power to demand action from the Brussels Parliament to protect our faces. In particular, the petition calls on the Parliament to ban facial recognition in public places and for identification purposes, and to grant the NGOs a hearing before the Parliament. This is an important chance to put a stop to these discriminatory, intrusive technologies of mass surveillance.

Are you a resident of the Brussels region? Join the fight against biometric mass surveillance by signing the new petition by the Protect My Face coalition:

Read more

Our movement gathered in Brussels

Between 6 and 9 November 2022, more than 20 activists from across Europe gathered in Brussels to celebrate the successes of the Reclaim Your Face movement. We got to meet each other in real life after months of online organising, reflected on our wide range of decentralised actions, and learned from each other how to couple grassroots organising with EU advocacy aimed at specific events and EU institutions. Read on to see what we did.

“It’s unbelievable we did all this.”

was how Andrej Petrovski of SHARE Foundation aptly summed up the event.
Read More


ReclaimYourFace is a movement led by civil society organisations across Europe:

Access Now ARTICLE19 Bits of Freedom CCC Defesa dos Direitos Digitais (D3) Digitalcourage Digitale Gesellschaft CH Digitale Gesellschaft DE Državljan D EDRi Electronic Frontier Finland epicenter.works Hermes Center for Transparency and Digital Human Rights Homo Digitalis IT-Political Association of Denmark IuRe La Quadrature du Net Liberties Metamorphosis Foundation Panoptykon Foundation Privacy International SHARE Foundation
In collaboration with our campaign partners:

AlgorithmWatch AlgorithmWatch/CH All Out Amnesty International Anna Elbe Aquilenet Associazione Luca Coscioni Ban Facial Recognition Europe Big Brother Watch Certi Diritti Chaos Computer Club Lëtzebuerg (C3L) CILD D64 Danes je nov dan Datapanik Digitale Freiheit DPO Innovation Electronic Frontier Norway European Center for Not-for-profit Law (ECNL) European Digital Society Eumans Football Supporters Europe Fundación Secretariado Gitano (FSG) Forum InformatikerInnen für Frieden und gesellschaftliche Verantwortung Germanwatch German ACM chapter Gesellschaft für Informatik (German Informatics Society) GONG Hellenic Association of Data Protection and Privacy Hellenic League for Human Rights info.nodes Irish Council for Civil Liberties JEF, Young European Federalists Kameras Stoppen Ligue des droits de l'Homme (FR) Ligue des Droits Humains (BE) LOAD e.V. Ministry of Privacy Privacy First Privacy Lx Privacy Network Progetto Winston Smith Reporters United Saplinq Science for Democracy Selbstbestimmt.Digital STRALI Stop Wapenhandel The Good Lobby Italia UNI-Europa Unsurv Vrijbit Wikimedia FR Xnet


Reclaim Your Face is also supported by:

Jusos Piratenpartei DE Pirátská Strana

MEP Patrick Breyer, Germany, Greens/EFA
MEP Marcel Kolaja, Czechia, Greens/EFA
MEP Anne-Sophie Pelletier, France, The Left
MEP Kateřina Konečná, Czechia, The Left



Should your organisation be here, too?
Here's how you can get involved.
If you're an individual rather than an organisation, or your organisation type isn't covered in the partnering document, please get in touch with us directly.