News




Surveilling Europe’s edges: when research legitimises border violence

In May 2024, EDRi member Access Now’s Caterina Rodelli travelled across Greece to meet with local civil society organisations supporting migrant people and monitoring human rights violations, and to see first-hand how and where surveillance technologies are deployed at Europe’s borders.


In the second instalment of a three-part blog series, EDRi member Access Now’s Caterina Rodelli explains how EU-funded research projects on border surveillance are legitimising violent migration policies. Catch up on part one here.

As I drove along Greece’s E85 highway from the coastal city of Alexandroupoli towards the northernmost city of Orestiada, a grey wall appeared as I passed the small town of Soufli, known for its silk industry. The more I drove northwards, the more the wall revealed itself, its sharpened edges and surveillance towers spreading down the road.

This wall runs along the Evros river, which marks the 200 kilometre natural border between Türkiye and Greece. It was first built in 2011 as a wire fence, after the Greek government, inspired by the US-Mexico border, ordered its construction as a purported solution to keep out migrants seeking safety. Other walls have since been erected in the Evros region; physical manifestations of the “embeddedness and longevity of border violence.” Pushback practices — i.e. the informal and illegal expulsion of people across borders without due process — have been an integral element at many European borders since the early 2000s and have been recorded in Evros since the late 1980s.


Human rights groups, journalists, and researchers have documented several instances of abuse in this part of Greece: from trapping people in houses where they are subjected to degrading and violent treatment and pushing back migrants with sometimes fatal consequences, including the reported drowning of a four-year-old child, to instances of sexual assault.

Moreover, both migrants and those who defend their rights have been targeted with multiple smear campaigns, including one that blamed migrants for sparking wildfires that devastated more than 300 square miles of land in 2023.

Meanwhile, in the same region, the EU has been a long-time supporter of multiple research projects on border surveillance. As part of the now-defunct Horizon 2020 and ongoing Horizon Europe schemes, the EU has funded programmes testing integrated advanced border surveillance technologies, such as thermal imaging cameras, radio frequency analysis systems, mobile unmanned vehicles (including drones and autonomous ground/sea vehicles), and pylons mounted with tracking sensors.

Researcher Lena Karamanidou has been monitoring the development of these research projects, specifically in the Evros river’s delta, as part of the Border Violence Monitoring Network’s (BVMN) work on border surveillance technologies. Karamanidou has found evidence that two Horizon 2020 projects, Nestor and Andromeda, have been tested in the area and used by local police authorities for real-life border control activities, outside of the research framework. These projects, which claimed to improve border surveillance solutions, were implemented by a pan-European consortium of public bodies, private companies, and research institutes. Nestor focused on testing the “next generation holistic border surveillance system providing pre-frontier situational awareness beyond maritime and land border areas,” while Andromeda tested solutions for improving information sharing among authorities engaged in border management.


The roll-out of these research projects in the Evros area, where violence against migrant people is a near-daily occurrence, raises two important questions:

  1. What exactly does the EU mean when it talks about safety? The use of border surveillance technology is often justified under safety pretexts; however, several examples show that it has instead been used to facilitate pushbacks in the Mediterranean Sea or to conduct enforced disappearances at external borders. Given that the Evros river is a heavily surveilled and violent area, it is clear that this approach to safety does not include migrants’ safety, nor does it guarantee respect for the human rights of all.
  2. What is the role of research in legitimising policies of border violence?
    In the same way that biased research and pseudoscience have historically been used to justify racial discrimination, there is a high risk that the EU’s funding of public research into border surveillance could lend a veneer of scientific objectivity to some of its violent migration practices and policies. Tested across the EU, these programmes usually focus on demonstrating the systems’ accuracy, without questioning the problematic underlying assumption that migrant and racialised people are an inherent threat to Europe.

What European policymakers need to know — and do

Surveillance does not equal safety for all. While surveillance technology is deployed on security grounds, violence against migrant people continues daily. In addition, many of the EU-funded research projects testing and deploying this technology are doing so outside of any legislative framework, which further reduces accountability. EU policymakers must:

  • Stop using EU public research funds for border surveillance testing programmes;
  • Ensure transparency when the technology tested in any research projects is subsequently used in the field; and
  • Commit to ending any inhumane treatment and violations of migrants’ human rights at Europe’s borders.

For any questions regarding this blog, please contact caterina@accessnow.org. If you or your organisation needs digital security support, please contact help@accessnow.org.

This article was first published here by EDRi member Access Now.

This is the second instalment of a three-part blog series. Catch up on part one here.

Contribution by: Caterina Rodelli, EU Policy Analyst, EDRi member, Access Now

Biometric surveillance in the Czech Republic: the Ministry of the Interior is trying to circumvent the Artificial Intelligence Act

EDRi member Iuridicum Remedium draws attention to how biometric surveillance at airports is set to be legalised in the Czech Republic. According to the proposal, virtually anyone could become a person under surveillance. Moreover, surveillance could be extended from airports to other public spaces.

In September, the Czech government adopted a proposal by the Ministry of the Interior to legalise automated facial recognition at international airports. The government approved the proposal with changes: the time limit on the retention of data on unrecognised airport visitors will be reduced from 90 to 30 days.

However, the main problem regarding the judicial authorisation to include a person in the reference database remains. According to the EU Artificial Intelligence (AI) Act, the use of a facial recognition system on a specific person must be authorised by a court or other independent authority. But according to the Czech Ministry of the Interior, the court could also authorise entire “predefined categories of persons”. With such a vague definition, virtually anyone can be included in the database.

This effectively negates the basic control mechanism that the painstakingly negotiated AI Act introduced. The aim is clearly to keep the system at the airport running in essentially the same way as it has been operating since 2018, even if it is – according to our analysis – contrary to the law.

The current proposal concerns only the airport. However, the Ministry did not originally envisage such a restriction at all; it was only after comments from civil society that the law was limited to airport systems. Yet the explanatory memorandum shows that the ministry is certainly not opposed to extending the systems to other public spaces.

IuRe has therefore launched a special campaign against biometric surveillance. The website The Czech Republic is not China (only in Czech) introduces the public to the issue in the form of a quiz. Past experience shows that change can be achieved with the right social pressure, and hopefully it will be again.

Contribution by: Hynek Trojánek, EDRi member, Iuridicum Remedium

EDRi and the Reclaim Your Face campaign recognised as Europe AI Policy Leaders

EDRi and the Reclaim Your Face coalition were recognised as the Europe AI Policy Leader in Civil Society for our groundbreaking work as a coalition to advocate for a world free from biometric mass surveillance.

Last week, on 21 May 2024, EDRi was named the Europe AI Policy Leader in Civil Society by the Center for AI and Digital Policy Europe. The honour was specifically bestowed upon EDRi’s Reclaim Your Face campaign, which started in November 2020. Through this campaign, we called for an end to biometric mass surveillance practices in Europe, including through the EU Artificial Intelligence (AI) Act.

EDRi’s Ella Jakubowska and Andreea Belu receiving the Europe AI Policy Leader award at the CPDP 2024 opening ceremony.

This honour is a recognition of the work done by the coalition – over 110 organisations across 25 EU countries, hundreds of volunteers, and many actions taken by hundreds of thousands of supporters all over Europe. The award was presented in Brussels during the opening ceremony of the CPDP conference, and received on behalf of the coalition by Ella Jakubowska and Andreea Belu of the EDRi Brussels office.

Another recognition for the Reclaim Your Face campaign: Digital civic engagement honorary award

This honour adds to a previous recognition of the campaign’s achievements since it started. In December 2022, the Reclaim Your Face coalition received the honorary award for Digital civic engagement – a cooperation between LOAD e.V., the Friedrich Naumann Foundation for Freedom and the Thomas Dehler Foundation. The coalition was represented at the ceremony in Munich, Germany by Andreea Belu (EDRi Brussels office), Matthias Marx (Chaos Computer Club), and Konstantin Macher (formerly from Digitalcourage).

Andreea Belu (EDRi Brussels office – centre), Matthias Marx (Chaos Computer Club – left), and Konstantin Macher (formerly from Digitalcourage – right) receiving the award in Munich, Germany in December 2022.

What’s next for our fight against biometric mass surveillance?

We’re grateful for the recognition of our work, and the honour of being able to represent over a million people across the EU who want a ban on intrusive surveillance practices. Despite the disappointment of the final Artificial Intelligence (AI) Act, which is full of holes when it comes to bans on different forms of biometric mass surveillance (BMS), we’re looking to the future. There are some silver linings in the legislation which give us opportunities to oppose BMS in public spaces and to push for better protection of people’s sensitive biometric data. Read more about how we’re planning to continue fighting for a world free from biometric mass surveillance. You can also look at our living legal and practical guide for civil society organisations, academics, communities and activists, which charts a human rights-based approach for how to keep resisting BMS practices now and in the future.

The future of our fight against biometric mass surveillance

The final AI Act is disappointingly full of holes when it comes to bans on different forms of biometric mass surveillance (BMS). Despite this, there are some silver linings in the form of opportunities to oppose BMS in public spaces and to push for better protection of people’s sensitive biometric data.

Throughout spring 2024, European Union (EU) lawmakers have been taking the final procedural steps to pass a largely disappointing new law, the EU Artificial Intelligence (AI) Act.

This law is expected to come into force in the summer, with one of the most hotly-contested parts of the law – the bans on unacceptably harmful uses of AI – slated to apply from the end of 2024 (six months and 20 days after the legal text is officially published).

The first draft of this Act, in 2021, proposed to ban some forms of public facial recognition, showing that lawmakers were already listening to the demands of our Reclaim Your Face campaign. Since then, the AI Act has continued to be a focus point for our fight to stop people being treated as walking barcodes in public spaces.

But after a gruelling three-year process, AI Act negotiations are coming to an underwhelming end, with numerous missed opportunities to protect people’s rights and freedoms or to uphold civic space.

One of the biggest problems we see is that the bans on different forms of biometric mass surveillance, or BMS, are full of holes. BMS is the term we’ve used as an umbrella for different methods of using people’s biometric data to surveil them in an untargeted or arbitrarily-targeted way – which have no place in a democratic society.

At the same time, all is not lost. As we get into the nitty-gritty of the final text, and reflect on the years of hard work, we mourn the existence of the dark clouds – and we celebrate the silver linings and the opportunities they create to better protect people’s sensitive biometric data.

Legitimising biometric mass surveillance

Whilst the AI Act is supposed to ban a lot of unacceptable biometric practices, we’ve argued since the beginning that it could instead become a blueprint for how to conduct BMS.

As we predicted, the final Act takes a potentially dystopian step towards legalising live public facial recognition – which so far has never been explicitly allowed in any EU country. The same goes for pseudo-scientific AI ‘mind-reading’ systems, which the AI Act shockingly allows states to use in policing and border contexts. Using machines to categorise people’s gender and other sensitive characteristics, based on how they look, is also allowed in several contexts.

We have long argued that these practices can never be compatible with our fundamental rights to dignity, privacy, data protection, free expression and non-discrimination. By allowing them in a range of contexts, the AI Act legitimises these horrifying practices.

Reasons for hope

Yet whilst the law falls far short of the full ban on biometric mass surveillance in public spaces that we called for, it nevertheless offers several points to continue our fight in the future. To give one example, we have the powerful opportunity to capitalise on the wide political will in support of our ongoing work against BMS to make sure that the AI Act’s loopholes don’t make it into national laws in EU member states.

Our upcoming ‘Legal and practical guide to fighting BMS after the AI Act’ is therefore intended to inform and equip those who are reflecting and re-fuelling for the next stage in the fight against BMS.

This guide will lay out where we can use the AI Act’s opportunities to fight for better protections for our rights to exist free from BMS in public spaces. This includes charting out more than 10 specific advocacy opportunities including formal and informal spaces to influence, and highlighting the parts of the legal text that create space for our advocacy efforts.

A precedent for banning dangerous AI

We also remind ourselves that whilst the biometrics bans have been dangerously watered down, the Act nevertheless accepts that we must ban AI systems that are not compatible with a democratic society. This idea has been a vital concept for those of us working to protect human rights in the age of AI, and we faced a lot of opposition on this point from industry and conservative lawmakers.

This legal and normative acceptance of the need for AI bans has the potential to set an important global precedent for putting the rights and freedoms of people and communities ahead of the private interests of the lucrative security and surveillance tech industry. The industry wants all technologies and practices to be on the table – but the AI Act shows that this is not the EU’s way.

By Ella Jakubowska, Head of Policy, EDRi

The colonial biometric legacy at heart of new EU asylum system

On Wednesday (10 April), the EU is set to vote on a new set of asylum and migration reforms. Among the many controversial changes proposed in the new migration pact, one went almost unnoticed — a seemingly innocent reform of the EU’s asylum database, EURODAC. Although framed as purely technical adjustments, the reality is far more malicious. The changes to EURODAC will massively exacerbate violence against people on the move.


Reform of this 20-year-old database will make it the technological sword of the EU’s hostile asylum and border policies. It will harness the most nefarious surveillance technologies that exist to date — namely the capture, processing and analysis of biometric data — and enable EU states to have full control over migrants’ bodies and movements.

With the collection of biometrics, the body has already become a “passport” for many. Biometrics is the process of making data out of a person’s biological or physiological characteristics. Fingerprints, facial images and iris scans are among the forms of biometrics most widely used by states to uniquely identify a person.

Historically, the identification of every single individual has been key to the organisation of state control and domination over the population. In particular, it allows state authorities to track, monitor and restrict people’s movements.

It’s no surprise that biometrics are becoming the centrepiece of states’ expanding technological surveillance systems, and it’s even less surprising that they’re a part of migration control policies. The very origin of biometric surveillance stems from colonial practices of dominating and discriminating against certain groups of people.

All the way back to the slave trade

The transatlantic slave trade developed technologies to mark, identify and track African people as captives and property on a global scale. Forensic identification methods — which included detailed descriptions of facial and bodily features as well as inked fingerprints and photographs of criminal suspects — were mostly applied in the French Empire’s colonies to guarantee order and the continuity of the colonial regime.

Likewise, British colonists ran the first large-scale biometric identity programme involving fingerprinting for controlling people in India. Thus, biometric registration as a replacement for documents and identity proof first became a reality for Black, brown and Asian bodies, especially those who were on the move.

The EU’s policies are just a continuation of this draconian history. Its first centralised biometric database, the European Asylum Dactyloscopy Database (EURODAC), was built to control secondary movements of asylum-seekers within the EU and to register people who irregularly cross external borders.

With the ongoing reform of EURODAC, the mass and routine identification of asylum-seekers, refugees and migrants through biometric data processing will become the building block of the EU’s inhumane asylum system.

The proposed reform is presented as a “mere technicality”, yet the transformation it brings is in fact highly political — it will codify in technology the violent treatment of migrants in the EU. This means systematic criminalisation, detention in prison-like conditions and swift expulsion.

One of the proposed reforms that expands the scope of EURODAC is the capture of people’s facial images in addition to fingerprints.

Policymakers have justified collecting this additional biometric data by pointing to reports that some asylum seekers deliberately burn or damage their fingers to alter their fingerprints and avoid identification.

For people on the move, identification implies an imminent risk of being detained, sent back to another EU state they left — usually because of dreadful reception conditions and few opportunities for integration — or deported to so-called “safe third countries” where they risk persecution and torture. Instead of seeing people forced to harm themselves to avoid identification as a sign that migration policies need to be more humane, the EU has decided to further surveil and terrorise migrants.

EURODAC is also being turned into a mass surveillance tool by targeting even more groups of people than before — including children as young as six.

Despite some weak attempts to require that the data collection is done in a ‘child-friendly’ manner, it will not change the outcome — children will be subjected to a seriously invasive and unjustified procedure that de facto stigmatises them.

Consider that in the EU, children younger than 16 are not even able to freely consent to the processing of their personal data under the General Data Protection Regulation (GDPR). Meanwhile, migrant children will have their faces scanned and fingerprints taken in border camps and detention centres.

Also, police authorities will be able to access EURODAC data with barely any conditions attached — treating all asylum-seekers and refugees with a presumption of illegality.

EU’s racist double standards exemplified

The use of biometric surveillance in EURODAC has just one explicit purpose — to increase power and control over migrants who have been made socially vulnerable by unfair migration policies and practices. It is intrusive, disproportionate, and contradicts Europe’s own gold standard for data protection.

The EU is currently building a regime of exception within its own legal framework for privacy and data protection, in which people on the move get differentiated treatment.

The EURODAC reform also demonstrates a larger trend in Europe of increasing criminalisation and the logic of ‘policing’. The EU blends migration management and the fight against crime by equating people seeking safety with security threats. This criminalisation lens leads to discriminatory assumptions and associations, resulting in racialised people and migrants being over-surveilled and targeted.

With the massive expansion of centralised databases, EURODAC being a prime example, the EU can no longer hide its racist double standards.

This article was first published here by EUobserver.

Contribution by: Laurence Meyer, Racial and social justice lead, EDRi member, Digital Freedom Fund (DFF) & Chloé Berthélémy, Senior Policy Advisor, EDRi

EU’s AI Act fails to set gold standard for human rights

A round-up of how the EU Artificial Intelligence (AI) Act fares against the collective demands of a broad civil society coalition that advocated for prioritising the protection of fundamental human rights in the law.

For the last three years, EDRi has worked in coalition with a broad range of digital, human rights and social justice groups to demand that artificial intelligence (AI) works for people, prioritising the protection of fundamental human rights. We have put forward our collective vision for an approach where “human-centric” is not just a buzzword, where people on the move are treated with dignity, and where lawmakers are bold enough to draw red lines against unacceptable uses of AI systems.

Following a gruelling negotiation process, EU institutions are expected to conclusively adopt the final AI Act in April 2024. But while they celebrate, we take a much more critical stance. We want to highlight the many missed opportunities to make sure that our rights to privacy, equality, non-discrimination, the presumption of innocence and many other rights and freedoms are protected when it comes to AI. Here’s our round-up of how the final law fares against our collective demands.

Please note that this analysis is based on the latest available version of the AI Act text, dated 6 March 2024. There may still be small changes made before the law’s final adoption.

First, we called on EU lawmakers to empower affected people by upholding a framework of accountability, transparency, accessibility and redress. How did they do?

Some accessibility barriers have been broken down, but more needs to be done:

  • Article 16 (ja) of the AI Act fulfils our call for accessibility by stating that high-risk AI systems must comply with accessibility requirements. However, we still believe that this should be extended to apply to low- and medium-risk AI systems as well, in order to ensure that the needs of people with disabilities are central in the development of all AI systems which could impact them.

More transparency about certain AI deployments but big loopholes for the private sector and security agencies:

  • The AI Act establishes a publicly-accessible EU database to provide transparency about AI systems that pose higher risks to people’s rights or safety. While originally only providers of high-risk AI systems were subject to transparency requirements, we successfully persuaded decision-makers that deployers of AI systems – those who actually use the systems – should also be subject to transparency obligations.
  • Providers and deployers who place on the market or use AI systems in high-risk areas – such as employment and education, as designated by Annex III – will be subject to transparency obligations. Providers will be required to register their high-risk system in the database and to enter information about it, such as a description of its intended purpose and a concise description of the information used by the system and its operating logic. Deployers of high-risk AI systems who are public authorities – or those acting on their behalf – will be obliged to register their use of the system. They will be required to enter information in the database such as a summary of the findings of a fundamental rights impact assessment (FRIA) and a summary of the data protection impact assessment. However, deployers of high-risk AI systems in the private sector will not be required to register their use of high-risk systems – another critical shortcoming.
  • The major shortcoming of the EU database is that negotiators agreed on a carve-out for law enforcement, migration, asylum and border control authorities. Providers and deployers of high-risk systems in these areas will be required to register only a limited amount of information, and only in a non-publicly accessible section of the database. Certain important pieces of information, such as the training data used, will not be disclosed at all. This will prevent affected people, civil society, journalists, watchdog organisations and academics from exercising public scrutiny over these high-stakes areas, which are prone to fundamental rights violations, and from holding the responsible authorities accountable.

Fundamental rights impact assessments are included, but concerns remain about how meaningful they will be:

  • We successfully convinced EU institutions of the need for fundamental rights impact assessments (FRIAs). However, based on the final AI Act text, we have doubts whether it will actually prevent human rights violations and serve as a meaningful tool of accountability. We see three primary shortcomings:
  1. Lack of a meaningful assessment and of an obligation to prevent negative impacts: while the new rules require deployers of high-risk AI systems to list risks of harm to people, there is no explicit obligation to assess whether these risks are acceptable in light of fundamental rights law, nor to prevent them wherever possible. Regrettably, deployers only have to specify which measures will be taken once risks materialise, likely once the harm has already been done.
  2. No mandatory stakeholder engagement: the requirement to engage external stakeholders, including civil society and people affected by AI, in the assessment process was also removed from the article at the last stages of negotiations. This means that civil society organisations will not have a direct, legally-binding way to contribute to impact assessments.
  3. Transparency exceptions for law enforcement and migration authorities: while in principle deployers of high-risk AI systems will have to publish a summary of the results of FRIAs, this will not be the case for law enforcement and migration authorities. The public will not even have access to the mere information that an authority uses a high-risk AI system in the first place. Instead, all information related to the use of AI in law enforcement and migration will only be included in a non-public database, severely limiting constructive public oversight and scrutiny. This is a very concerning development as, arguably, the risks to human rights, civic space and the rule of law are the most severe in these two areas. Moreover, while deployers are obliged to notify the relevant market surveillance authority of the outcome of their FRIA, there is an exemption from this obligation for ‘exceptional reasons of public security’. This excuse is often misused as a justification to carry out disproportionate policing and border management activities.

When it comes to complaints and redress, there are some remedies, but no clear recognition of “affected person”:

  • Civil society has advocated for robust rights and redress mechanisms for individuals and groups affected by high-risk AI systems. We have demanded the creation of a new section titled ‘Rights of Affected Persons’, which would delineate specific rights and remedies for individuals impacted by AI systems. However, this section was not created; instead, there is a “remedies” chapter that includes only some of our demands.
  • This chapter of remedies includes the right to lodge complaints with a market surveillance authority, but lacks teeth, as it remains unclear how effectively these authorities will be able to enforce compliance and hold violators accountable. Similarly, the right to an explanation of individual decision-making processes, particularly for AI systems listed as high-risk, raises questions about the practicality and accessibility of obtaining meaningful explanations from deployers. Furthermore, the effectiveness of these mechanisms in practice remains uncertain, given the absence of provisions such as the right to representation of natural persons, or the ability for public interest organisations to lodge complaints with national supervisory authorities.

The Act allows a double standard when it comes to the human rights of people outside the EU:

  • The AI Act falls short of civil society’s demand to ensure that EU-based AI providers whose systems impact people outside of the EU are subject to the same requirements as those inside the EU. The Act does not stop EU-based companies from exporting AI systems which are banned in the EU, therefore creating a huge risk of violating rights of people in non-EU countries by EU-made technologies that are essentially incompatible with human rights. Additionally, the Act does not require exported high-risk systems to follow the technical, transparency or other safeguards otherwise required when AI systems are intended for use within the EU, again risking the violation of rights of people outside of the EU by EU-made technologies.

Second, we urged EU lawmakers to limit harmful and discriminatory surveillance by national security, law enforcement and migration authorities. How did they do?

The blanket exemption for national security risks undermining other rules:

  • The AI Act and its safeguards will not apply to AI systems if they are developed or used solely for the purpose of national security, regardless of whether this is done by a public authority or a private company. This exemption introduces a significant loophole that will automatically exempt certain AI systems from scrutiny and limit the applicability of the human rights safeguards envisioned in the AI Act.
  • In practical terms, it would mean that governments could invoke national security to introduce biometric mass surveillance systems, without having to apply any safeguards envisioned in the AI Act, without conducting a fundamental rights impact assessment and without ensuring that the AI system meets high technical standards and does not discriminate against certain groups.
  • Such a broad exemption is not justified under EU treaties and goes against established jurisprudence of the European Court of Justice. While national security can be a justified ground for exceptions from the AI Act, this has to be assessed case-by-case, in line with the EU Charter of Fundamental Rights. The adopted text, however, makes national security a largely digital rights-free zone. We are concerned about the lack of clear national-level procedures to verify if the national security threat invoked by the government is indeed legitimate and serious enough to justify the use of the system and if the system is developed and used with respect for fundamental rights. The EU has also set a worrying precedent regionally and globally; broad national security exemptions have now been introduced in the newly-adopted Council of Europe Convention on AI.

Predictive policing, live public facial recognition, biometric categorisation and emotion recognition are only partially banned, legitimising these dangerous practices:

  • We called for comprehensive bans against any use of AI that isn’t compatible with rights and freedoms – such as proclaimed AI ‘mind reading’, biometric surveillance systems that treat us as walking barcodes, or algorithms used to decide whether we are innocent or guilty. All of these examples are now partially banned in the AI Act, which is an important signal that the EU is prepared to draw red lines against unacceptably harmful uses of AI.
  • At the same time, all of these bans contain significant and disappointing loopholes, which means that they will not achieve their full potential. In some cases, these loopholes risk having the opposite effect from what a ban should: they give the signal that some forms of biometric mass surveillance and AI-fuelled discrimination are legitimate in the EU, which risks setting a dangerous global precedent.
  • For example, the fact that emotion recognition and biometric categorisation systems are prohibited in the workplace and in education settings, but are still allowed when used by law enforcement and migration authorities, signals the EU’s willingness to test the most abusive and intrusive surveillance systems against the most marginalised in society.
  • Moreover, when it comes to live public facial recognition, the Act paves the way to legalise some specific uses of these systems for the first time ever in the EU – despite our analysis showing that all public-space uses of these systems constitute an unacceptable violation of everyone’s rights and freedoms.

The serious harms of retrospective facial recognition are largely ignored:

  • When it comes to retrospective facial recognition, this practice is not banned at all by the AI Act. As we have explained, the use of retrospective (post) facial recognition and other biometric surveillance systems (called ‘remote biometric identification’, or ‘RBI’ in the text) is just as invasive and rights-violating as the use of live (real-time) systems. Yet the AI Act makes a big error in claiming that the extra time for retrospective uses will mitigate possible harms. While several lawmakers have argued that they managed to insert several safeguards, our analysis is that the safeguards are not meaningful enough and could be easily circumvented by police. In one place, the purported safeguard even suggests that the mere suspicion of any crime having taken place would be enough to justify the use of a post RBI system – a lower threshold than we currently benefit from under EU data protection law.

People on the move are not afforded the same rights as everyone else, with only weak – and at times absent – rules on the use of AI at borders and in migration contexts:

  • In its final version, the EU AI Act sets a dangerous precedent for the use of surveillance technology against migrants, people on the move and marginalised groups. The legislation develops a separate legal framework for the use of AI by migration control authorities, in order to enable the testing and the use of dangerous surveillance technologies at the EU borders and disproportionately against racialised people.
  • None of the bans meaningfully apply to the migration context, and the transparency obligations present ad-hoc exemptions for migration authorities, allowing them to act with impunity and far away from public scrutiny.
  • The list of high-risk systems fails to capture the many AI systems used in the migration context, as it excludes dangerous systems such as non-remote biometric identification systems, fingerprint scanners, or forecasting tools used to predict, interdict, and curtail migration.
  • Finally, AI systems used as part of EU large-scale migration databases (e.g. Eurodac, the Schengen Information System, and ETIAS) will not have to be compliant with the Regulation until 2030, which gives plenty of time to normalise the use of surveillance technology.

Third, we urged EU lawmakers to push back on Big Tech lobbying and to remove loopholes that undermine the regulation. How did they do?

The risk classification framework has become a self-regulatory exercise:

  • Initially, all use cases included in the list of high-risk applications would have had to follow specific obligations. However, as a result of heavy industry lobbying, providers of high-risk systems will now be able to decide whether their systems are high-risk or not, as an additional “filter” was added into that classification system.
  • Providers will still have to register sufficient documentation in the public database to explain why they don’t consider their system to be high-risk. However, this obligation will not apply when they are providing systems to law enforcement and migration authorities. This will pave the way for the free and deregulated procurement of surveillance systems in the policing and border contexts.

The Act takes only a tentative first step to address environmental impacts of AI:

  • We have serious concerns about how the exponential use of AI systems can have severe impacts on the environment, including through resource consumption, extractive mining, and energy-intensive processing. Today, information on the environmental impacts of AI is a closely-guarded corporate secret. This makes it difficult to assess the environmental harms of AI and to develop political solutions to reduce carbon emissions and other negative impacts.
  • The first draft of the AI Act completely neglected these risks, despite civil society and researchers repeatedly calling for the energy consumption of AI systems to be made transparent. To address this problem, the AI Act now requires providers of general-purpose AI (GPAI) models that are trained with large amounts of data and consume a lot of electricity to document their energy consumption. The Commission now has the task of developing a suitable methodology for measuring this energy consumption in a comparable and verifiable way.
  • The AI Act also requires that standardised reporting and documentation procedures must be created to ensure the efficient use of resources by some AI systems. These procedures should help to reduce the energy and other resource consumption of high-risk AI systems during their life cycle. These standards are also intended to promote the energy-efficient development of general-purpose AI models.
  • These reporting standards are a crucial first step to provide basic transparency about some ecological impacts of AI, first and foremost the energy use. But they can only serve as a starting point for more comprehensive policy approaches that address all environmental harms along the AI production process, such as water and minerals. We cannot rely on self-regulation, given how fast the climate crisis is evolving.

What’s next for the AI Act?

The coming year will be decisive for the EU’s AI Act, with different EU institutions, national lawmakers and even company representatives setting standards, publishing interpretive guidelines and driving the Act’s implementation across the EU’s member countries. Some parts of the law – the prohibitions – could become operational as soon as November 2024. It is therefore vital that civil society groups are given a seat at the table, and that this work is not done in opaque settings and behind closed doors.

We urge lawmakers around the world who are also considering bringing in horizontal rules on AI to learn from the EU’s many mistakes outlined above. A meaningful set of protections must ensure that AI rules truly work for individuals, communities, society, rule of law, and the planet.

While this long chapter of lawmaking is now coming to a close, the next chapter of implementation – and trying to get as many wins out of this Regulation as possible – is just beginning. As a group, we are drafting an implementation guide for civil society, coming later this year. We want to express our thanks to the entire AI core group, who have worked tirelessly for over three years to analyse, advocate and mobilise around the EU AI Act. In particular, we thank Sarah Chander of the Equinox Initiative for Racial Justice for her work, dedication, vision and leadership of this group over the last three years.

Authors:

• Ella Jakubowska, EDRi
• Kave Noori, EDF
• Mher Hakobyan, Amnesty International
• Karolina Iwańska, ECNL
• Kilian Vieth-Ditlmann, AlgorithmWatch
• Nikolett Aszodi, AlgorithmWatch
• Judith Membrives Llorens, Lafede.cat / Algorights
• Caterina Rodelli, Access Now
• Daniel Leufer, Access Now
• Nadia Benaissa, Bits of Freedom
• Ilaria Fevola, Article 19



ReclaimYourFace is a movement led by civil society organisations across Europe:

Access Now ARTICLE19 Bits of Freedom CCC Defesa dos Direitos Digitais (D3) Digitalcourage Digitale Gesellschaft CH Digitale Gesellschaft DE Državljan D EDRi Electronic Frontier Finland epicenter.works Hermes Center for Transparency and Digital Human Rights Homo Digitalis IT-Political Association of Denmark IuRe La Quadrature du Net Liberties Metamorphosis Foundation Panoptykon Foundation Privacy International SHARE Foundation
In collaboration with our campaign partners:

AlgorithmWatch AlgorithmWatch/CH All Out Amnesty International Anna Elbe Aquilenet Associazione Luca Coscioni Ban Facial Recognition Europe Big Brother Watch Certi Diritti Chaos Computer Club Lëtzebuerg (C3L) CILD D64 Danes je nov dan Datapanik Digitale Freiheit DPO Innovation Electronic Frontier Norway European Center for Not-for-profit Law (ECNL) European Digital Society Eumans Football Supporters Europe Fundación Secretariado Gitano (FSG) Forum InformatikerInnen für Frieden und gesellschaftliche Verantwortung Germanwatch German acm chapter Gesellschaft für Informatik (German Informatics Society) GONG Hellenic Association of Data Protection and Privacy Hellenic League for Human Rights info.nodes Irish Council for Civil Liberties JEF, Young European Federalists Kameras Stoppen Ligue des droits de L'Homme (FR) Ligue des Droits Humains (BE) LOAD e.V. Ministry of Privacy Privacy First Privacy Lx Privacy Network Progetto Winston Smith Reporters United Saplinq Science for Democracy Selbstbestimmt.Digital STRALI Stop Wapenhandel The Good Lobby Italia UNI-Europa Unsurv Vrijbit Wikimedia FR Xnet


Reclaim Your Face is also supported by:

Jusos Piratenpartei DE Pirátská Strana

MEP Patrick Breyer, Germany, Greens/EFA
MEP Marcel Kolaja, Czechia, Greens/EFA
MEP Anne-Sophie Pelletier, France, The Left
MEP Kateřina Konečná, Czechia, The Left


