News




How can you influence the AI Act in order to ban biometric mass surveillance across Europe?

The EU is currently negotiating the Artificial Intelligence (AI) Act. This future law offers the chance to effectively ban biometric mass surveillance. This article aims to offer an overview of how the EU negotiates its laws, and of the key moments in the AI Act process in which people can make their voices heard.


Two months after Reclaim Your Face launched our European Citizens’ Initiative (February 2021), the EU proposed a new law: the AI Act. In April 2021, the draft law included a ban on some forms of biometric surveillance. Despite its shortcomings, the mere mention of the word “prohibit” in the draft law was a huge success for our campaign.

The AI Act draft showed that, if it wants, the EU has the power to truly ban biometric mass surveillance practices. As a result, we decided that the negotiations around this law would be crucial to making our Reclaim Your Face campaign demands real.

Most importantly, it showed that the calls launched by tens of thousands of people and civil society organisations across Europe since October 2020 have had a real impact.

How are EU laws negotiated?

The process of EU law-making can be difficult to grasp. The graphic below explains the role of the European Commission, the negotiations between the European Parliament and the Council of the EU, and the actions we have taken – and continue to take – during these steps.

As you can see, the European Commission (EC) is the body that proposes a new EU law. After preparatory work, the EC writes a draft, publishes it and sends it to the Parliament and the Council. The Parliament and the Council each debate internally and form a position on the EC draft. Next, they meet – together with the EC – in a negotiation step called ‘trilogues’. Unfortunately, trilogues are notorious for their opacity and lack of opportunities for public scrutiny.

If you want to know more about where EU legislative and non-legislative Proposals come from, check EDRi’s Activist Guide to the Brussels Maze.

The role of the Parliament is crucial

The European Parliament is the only directly-elected EU body. For this reason, it is probably the body that most takes into account people’s voices. Influencing the opinion of the Parliament – before the trilogues start – is therefore a key component of civil society’s work on EU laws.

The European Parliament is formed of 705 Members (MEPs) from all 27 EU Member States. Most MEPs are also part of Parliament Committees. The Committees have a crucial role in forming the Parliament’s position.

One or more Committees are assigned to write a report that forms the basis of the entire European Parliament’s position. The AI Act is handled jointly in the Parliament by two Committees: LIBE (Civil Liberties, Justice and Home Affairs) and IMCO (Internal Market and Consumer Protection).

Each Committee has a Rapporteur (overall lead) and several Shadow Rapporteurs (one lead for each political group). The important thing to remember is that these MEPs are all key players in shaping the Parliament’s report on the AI Act. They may also be influenced by other MEPs in their Committee(s), who can suggest changes to the draft report (“amendments”), as well as by the heads of their political groups. See more below.

When can people strategically influence the negotiations on the AI Act?

During the negotiations of the lead committees

The lead committees in the Parliament (IMCO and LIBE) are working on their report on the AI Act until October 2022. This means that we should already be raising awareness of our work and our demands among MEPs in those two committees. What are some crucial steps of the negotiation around the LIBE–IMCO Committee report?

First, the lead Rapporteurs Benifei and Tudorache publish an IMCO–LIBE draft report (expected April 2022) which represents the parts of the position on which they could agree. Afterwards, the other MEPs in IMCO and LIBE can propose amendments to this draft report, including in the areas that need more democratic scrutiny. Amendments are expected to be tabled until 18 May 2022.

The amendments are then negotiated among a selected number of MEPs in the LIBE–IMCO committees (the Rapporteurs and Shadow Rapporteurs) and agreed upon through compromises. The negotiations around these amendments and the agreement on a compromise text for the LIBE–IMCO report are expected to happen between May and October 2022.

After lead committees conclude their report, and just before the Plenary vote

Once out of committee negotiations, this joint IMCO–LIBE report will be presented to the full Parliament, known as the Plenary. When the report is presented to the Plenary, there is also an opportunity for last-minute amendments to the committees’ report to be put forward. This tabling of amendments before the Plenary vote is yet another moment in which MEPs may introduce protections for people against biometric mass surveillance. 

After any final amendments to the IMCO–LIBE report are voted on, all 705 MEPs will vote on whether or not to accept this final version of the IMCO–LIBE report as the Parliament’s position. Currently, the 705 MEPs are scheduled to vote on the final report in November 2022.

In parallel, the Council of the EU – representing the Member State governments – is currently trying to make the in-principle ban on biometric surveillance from the Commission’s draft weaker and narrower. If the Parliament agrees on the need for a full ban on biometric mass surveillance practices, we have a chance to fight back against the Council’s proposal.

Supporters of Reclaim Your Face can play an important role in the negotiations of the EU’s Artificial Intelligence Act. Are you ready for bold, strategic and direct action? Subscribe to EDRi’s mailing list to be kept in the loop and follow our social media channels.

About ClearviewAI’s mockery of human rights, those fighting it, and the need for EU to intervene

Clearview AI describes itself as ‘The World’s Largest Facial Network’. However, a quick online search reveals that the company has been involved in several scandals, covering the front pages of many publications for all the wrong reasons. In fact, since The New York Times broke the story about Clearview AI in 2020, the company has been constantly criticised by activists, politicians and data protection authorities around the world. Below is a summary of the many actions taken against the company that hoarded 10 billion images of our faces.


How did Clearview AI build a database of 10 billion images?

Clearview AI gathers data automatically, through a process called (social media) online scraping. This way of gathering data enables biometric mass surveillance, and it is also used by other actors such as PimEyes.

The company scrapes the pictures of our faces from the entire internet – including social media applications – and stores them on its servers. Through this gathering and storage of scraped data, Clearview AI managed to create a database of 10 BILLION images. Now, the company uses an algorithm that matches a given face against all the faces in its 10-billion-image database: (virtually) anyone and everyone.

Creepily enough, this database is available to any company, law enforcement agency or government that can pay for access.

This will go on, as long as we don’t put a stop to Clearview AI and its peers. Reclaim Your Face partners and other organisations have taken several actions to limit Clearview AI in France, Italy, Germany, Belgium, Sweden, the United Kingdom, Australia and Canada.

In several EU countries, activists, Data Protection Authorities and watchdogs have taken action.

In May 2021, a coalition of organisations (including noyb, Privacy International (PI), Hermes Center and Homo Digitalis) filed a series of submissions against Clearview AI, Inc. The complaints were submitted to data protection regulators in France, Austria, Italy, Greece and the United Kingdom.

Here are some of the Data Protection Authorities and watchdogs’ decisions:

France

Following complaints from Reclaim Your Face partner Privacy International and from individuals about Clearview AI’s facial recognition software, the French data protection authority (‘CNIL’) decided in December 2021 that Clearview AI must cease collecting and using data from data subjects in France.

Italy

After individuals (including Riccardo Coluccini) and Reclaim Your Face organisations (among them the Hermes Center for Transparency and Digital Human Rights and Privacy Network) filed complaints against Clearview AI, Italy’s data privacy watchdog (Garante per la Protezione dei Dati Personali) fined Clearview AI the highest amount possible: 20 million Euros. The decision also orders the company to erase the data relating to individuals in Italy and bans any further collection and processing of data through the company’s facial recognition system.

Germany

Following an individual complaint from Reclaim Your Face activist Matthias Marx, the Hamburg Data Protection Agency ordered Clearview to delete the mathematical hash representing his profile. In doing so, the Hamburg DPA deemed Clearview AI’s biometric photo database illegal in the EU. However, Clearview AI has only deleted Matthias Marx’s data, and the DPA’s case is not yet closed.

Belgium

Although the use of this software has never been legal in Belgium, and after initially denying its deployment, the Ministry of the Interior confirmed in October 2021 that the Belgian Federal Police had used the services of Clearview AI. The use stemmed from a trial period the company provided to the Europol Task Force on Victim Identification. While admitting the use, the Ministry of the Interior also emphasised that Belgian law does not allow it. The Belgian police watchdog later confirmed this in a ruling that declared the use unlawful.

Sweden

In February 2021, the Swedish data protection authority (IMY) found that a local police authority’s use of Clearview’s technology involved the unlawful processing of biometric data for facial recognition. The DPA also noted that the police had failed to conduct a data protection impact assessment. The authority fined the local police authority €250,000 and ordered it to inform the people whose personal data had been sent to the company.

Clearview AI is in trouble outside of the European Union too

United Kingdom 

On 27 May 2021, Privacy International (PI) filed complaints against Clearview AI with the Information Commissioner’s Office (ICO), the UK’s independent regulator for data protection and information rights law. The ICO, together with the Office of the Australian Information Commissioner (OAIC), which regulates the Australian Privacy Act, had been conducting a joint investigation into Clearview AI since 2020. Last year, the ICO announced its provisional intent to impose a fine of over £17 million based on Clearview’s failure to comply with UK data protection laws.

In May 2022, the ICO issued Clearview AI Inc a fine of £7,552,800 and an enforcement notice, ordering the company to stop obtaining and using the personal data of UK residents that is publicly available on the internet, and to delete the data of UK residents from its systems.

Australia 

For its part, the OAIC has reached a decision and ordered the company to stop collecting facial biometrics and biometric templates from people in Australian territory, and to destroy all existing images and templates that it holds.

Canada 

Canadian authorities were unequivocal in ruling that Clearview AI violated their citizens’ right to privacy and, furthermore, that its use constitutes mass surveillance. Their statement highlights the clear link between Clearview AI and biometric mass surveillance, which treats all citizens as suspects of crime.

War and Clearview AI

In an already distressing war context, Ukraine’s defence ministry is using Clearview AI’s facial recognition technology for allegedly vetting people at checkpoints, unmasking Russian assailants, combating misinformation and identifying the dead.

It is deeply worrying that Clearview AI’s technologies are reportedly being used in warfare. What happens when Clearview AI decides to offer its services to military forces with whom we disagree? What does this say about the geopolitical power that we – as a society – allow private surveillance actors to wield? By allowing Clearview AI into military operations, we are opening a Pandora’s box: technologies that have been ruled incompatible with people’s rights and freedoms are being deployed in literal life-or-death situations. Clearview AI’s systems show documented racial bias and have facilitated several traumatic wrongful arrests of innocent people around the world. Even a system that is highly accurate in lab conditions will not perform as accurately in a war zone. This can lead to fatal results.

Clearview AI mocks national data protection authorities. We must act united! The EU must step up and use the AI Act to end the mocking of people’s rights and dignity.

Pressure is mounting, but Clearview AI is not stepping down. The company, which started as a service for law enforcement only, is now telling investors that it is expanding towards the monitoring of gig workers – among others.

All the data protection authorities mentioned above have strong arguments to justify their decisions. Yet in response, Clearview AI’s CEO issues statements that mock every fine national authorities impose.

National agencies and watchdogs cannot rein in Clearview AI alone. The EU must step in and ensure Clearview AI does not also mock the fundamental nature of human rights.

Reclaim Your Face campaigns to stop Clearview AI and any other uses of technology that create biometric mass surveillance. 

Further reading:

Italian DPA fines Clearview AI for illegally monitoring and processing biometric data of Italian citizens

Laura Carrer, Research and Advocacy at Digital Rights Unit, Hermes Center & Riccardo Coluccini, Reclaim Your Face national campaign contributor

On 9 March 2022 the Italian Data Protection Authority (DPA) fined the US-based company Clearview AI EUR 20 million after finding that the company monitored and processed biometric data of individuals on Italian territory without a legal basis.

The company reportedly owns a database of over 10 billion facial images scraped from public web sources such as websites, social media and online videos. It offers a sophisticated search service which creates profiles on the basis of the biometric data extracted from the images.

The fine is the highest possible under the General Data Protection Regulation (GDPR). It was prompted by a complaint sent by the Hermes Center in May 2021 in a joint action with EDRi members Privacy International, noyb and Homo Digitalis – in addition to complaints sent by individuals and a series of investigations launched in the wake of the 2020 revelations about Clearview AI’s business practices.

In addition to the fine, the Italian DPA ordered the company to delete personal and biometric data relating to individuals from Italy, to stop any further processing of data belonging to Italian people, and to designate a representative in the EU. The pictures were analysed by Clearview AI’s facial recognition algorithm to build up a gigantic database of biometric data, and access to this database was sold to law enforcement agencies. The company also extracts any metadata associated with an image: the title of the image or webpage, geolocation, date of birth, source link, nationality and gender.

According to the Italian DPA, biometric and personal data were processed unlawfully without an appropriate legal basis; the company failed to adequately inform people of how their images were collected and analysed; and it processed people’s data for purposes other than those for which they had been made available online. One of Clearview AI’s lines of argument was to equate itself to a Google Search for faces. However, the DPA stated that, by selling access to a database and a proprietary face-matching algorithm intended for certain categories of customers, “Clearview has specific characteristics that differentiate it from a common search engine that does not process or enrich images on the web […] creates a database of image snapshots that are stored as present at the time of collection and not updated.”

In addition, the DPA highlights that “the company’s legitimate interest in free economic initiative cannot but be subordinate to the rights and freedoms of the persons concerned.”

Clearview now has 30 days to communicate to the Italian DPA what measures it is adopting, and up to 60 days to either pay the fine or appeal in court.

This decision is another step in the right direction towards banning all sorts of biometric surveillance practices that, as highlighted by the EDRi-led campaign Reclaim Your Face, have a huge impact on fundamental human rights.


Further reading:

Italian DPA decision on Clearview AI (in Italian): https://www.garanteprivacy.it/web/guest/home/docweb/-/docweb-display/docweb/9751362

Hermes Center press release on fine to Clearview AI: https://www.hermescenter.org/clearview-ai-ha-monitorato-i-cittadini-italiani-garante-privacy-illegale/

Challenge against Clearview AI in Europe: https://privacyinternational.org/legal-action/challenge-against-clearview-ai-europe

Reclaim Your Face impact in 2021

A sturdy coalition, research reports, investigations, coordinated actions and amazing political support at national and EU level. This was 2021 for the Reclaim Your Face coalition – a year that, despite taking place during a pandemic, showed what the power of a united front looks like.


Forming a coalition in a strategic moment

In January 2021, a group of civil society organisations was meeting every two weeks to strategise and plan what has become one of the most politically powerful campaigns: Reclaim Your Face.

Set on a mission since October 2020, the then 12 organisations came together to form the Reclaim Your Face coalition, aiming to ban biometric mass surveillance in Europe. Since then, we have welcomed dozens more organisations working on digital rights and civil liberties, workers’ rights, the rights of Roma and Sinti people, LGBTQ+ rights, media freedom and the protection of migrants and people on the move. We gathered activists, volunteers, technologists, lawyers, academics and policy-makers – all united in one common goal.

The launch of the campaign happened at a strategic moment when the EU began its work on a law proposal to regulate artificial intelligence (AI). The relevance and timing of the Reclaim Your Face campaign is unquestionable as AI techniques are at the centre of today’s biometric surveillance technologies such as facial recognition.

Raising awareness of the spread and harms of biometric mass surveillance

For the people in the Reclaim Your Face coalition, 2021 started with a strong focus on raising awareness about the harms associated with biometric mass surveillance. Moreover, we showed that this exploitative practice is a reality in many cities across Europe, not a dystopian fiction.

Check out our video records.

Researching biometric mass surveillance

EDRi’s Brussels office and the leading organisations of the campaign coordinated research: mapping both the technology deployments and the legal frameworks that govern (or fail to govern) biometric mass surveillance practices in several EU countries.

Coordinating pandemic-proof actions

In 2021, we also coordinated online and offline actions that enabled every campaign supporter to act as part of a powerful collective. The pandemic put constraints on realising such actions; however, the creative hive mind behind the campaign made it happen!

The #PaperBagSociety stunt sparked curiosity and discussion as Reclaim Your Face activists wore paper bags on their heads in public spaces as a sign of protest. The #WalkTheTalk Twitter storm united activists across the Atlantic in calling on EU Commissioner Vestager and US Secretary Raimondo not to negotiate away our rights in their trade discussions.

Politically, our success has been clear

Our European Citizens’ Initiative has been described as “perhaps the most politically powerful” of all to date. Thank you to the almost 65,000 EU citizens who have supported it so far!

Firstly, together we successfully set the agenda of the debate on AI. Not only were the words “ban” and “remote biometric identification” (a prominent technique that leads to biometric mass surveillance) included in the AI Act law proposal, but many EU and national affairs newspapers acknowledged the importance of the topic and reported heavily on it.

Secondly, we gathered support from several influential bodies that also called for a ban: EU’s top data protection regulators (the EDPS and EDPB), the Green Group in the EU Parliament, as well as Germany’s newly elected government, several national data protection authorities and UN officials. Our impact is also evident in the report Members of the EU Parliament adopted, calling for a ban on biometric mass surveillance by law enforcement.

Through our coalition, we successfully applied pressure on national governments that tried to sneak in laws enabling biometric mass surveillance in Serbia and Portugal. In Italy, Reclaim Your Face campaigners helped to catalyse a moratorium on facial recognition, and in Hamburg, the data protection authority agreed with us that Clearview AI’s use of EU citizens’ face images is illegal.

Moving ahead in 2022, the Reclaim Your Face coalition is aiming to expand its reach, bringing together even more organisations fighting against biometric mass surveillance. We will train the many volunteers who have offered their support and reach a new level of political engagement.

Thank you for supporting us!

No biometric surveillance for Italian students during exams

In September 2021, the Italian Data Protection Authority (DPA) fined Luigi Bocconi University €200,000 for using Respondus, a proctoring software, without sufficiently informing students of the processing of their personal data and, among other violations, for processing their biometric data without a legal basis. Bocconi is a private university based in Milan which, during the COVID-19 pandemic, introduced Respondus tools to monitor students during remote exams.


Respondus offers two modules: Lockdown Browser and Respondus Monitor. The former prevents a student from using their computer as usual, meaning that the person cannot, for example, open other programs. Respondus Monitor checks that the person in front of the screen is the one who should be taking the exam, in order to prevent someone else from replacing the student or passing notes. To do this, the software uses algorithms that analyse the biometric data of the person’s face to confirm their presence, and it also records keystrokes, mouse movements and the duration of the exam. After processing the data, the software sends the professor a report showing the student’s image for identification purposes and flags any anomalies, with details on the reason for each alert.

The University initially tried to walk back what it stated in its own privacy policy, claiming that no biometric data was processed: the only identification happening, it argued, concerned the initial picture taken by the software and used by an operator (in this case the professor) to confirm the identity of the student. This did not match the real functioning of the system. In fact, in its decision, the DPA notes that Respondus declared that its software creates a biometric template to monitor the presence of the same person in front of the screen throughout the exam. For this reason, the “software performs a specific technical processing of a physical characteristic of the persons,” says the DPA, and, currently, there is no legal provision in Italy expressly authorising the processing of biometric data for the purpose of verifying the regularity of exams. The DPA also highlights that, considering that the processing was carried out by the University for the purpose of issuing degrees with legal value, and given the specific imbalance in the position of students with respect to the University, consent does not constitute the legal basis of the processing, nor can it be considered freely given.

In addition, the DPA considers the functionalities of the ‘Respondus Monitor’ component as a “partially automated processing operation for the analysis of the behaviour of the data subjects, in relation to the subsequent assessment by the teacher,” and this “gives rise to the ‘profiling’ of the students.”

This processing of personal data, according to the DPA, may have an impact on the emotional and psychological sphere of the persons concerned which “may also derive from the specific functionalities of the supervision system, such as, in this case, facial recognition and behavioural profiling, with possible repercussions on the accuracy of the anomalies detected by the algorithm and therefore, indirectly, also on the overall outcome of the test.” 

Bocconi is not the only Italian university using proctoring software. In June 2020, at least ten universities in Italy were using (or planning to use) similar tools, such as Proctorio, ProctorExam and Safe Exam Browser. The Authority’s decision should prevent other Italian universities from using software similar to Respondus that collects and processes students’ biometric data.

Despite this pushback on student monitoring, the decision also reminds us that biometric surveillance is increasingly expanding into every sphere of our lives, and that the only solution is to call for a ban on these technologies.

Contribution by: Laura Carrer, Research and Advocacy at Digital Rights Unit, Hermes Center & Riccardo Coluccini, Reclaim Your Face national campaign contributor.

People across Switzerland reclaim their faces and public spaces!

On 18 November, three of the organisations that have long championed the Reclaim Your Face campaign – Digitale Gesellschaft (CH), Algorithm Watch CH and Amnesty International (CH) – co-launched a brand new and exciting action in the fight to curtail the sinister rise of biometric mass surveillance practices across Europe!


Called ‘Gesichtserkennung stoppen’ (DE) / ‘Stop à la reconnaissance faciale’ (FR), this action calls on Swiss supporters to take a stand for human rights and oppose the expansion of facial recognition and related biometric mass surveillance in Switzerland.

The action, in the form of a petition, complements the long-running European Citizens’ Initiative (ECI) run by the 65+ organisations in the Reclaim Your Face campaign. However, because EU law limits the signing of an ECI to people who hold EU citizenship, our Swiss supporters have sadly been unable to make their opposition to a biometric mass surveillance society clear. Luckily, not any more!

The organisers explain why action is needed in Switzerland:

“We have the right to move freely in public places without anyone knowing what we are doing. But automatic facial recognition allows us to be identified in the street at any time. We want to prevent such mass surveillance. Take a stand against automated facial recognition in public places in Swiss cities! Sign our petition today.”

So what are you waiting for? If you live or work in Switzerland, let the government know that you want to be treated as a person, not a walking barcode.

And if you do have EU citizenship (even if you’re a dual national), you can give your voice legal weight by signing the ECI to ban biometric mass surveillance.

SUCCESS! New German government calls for European ban on biometric mass surveillance

The newly-agreed German government coalition has called for a Europe-wide ban on public facial recognition and other biometric surveillance. This echoes the core demands of the Reclaim Your Face campaign which EDRi has co-led since 2020, through which over 65 civil society groups ask the EU and their national governments to outlaw biometric data mass surveillance.

Read More

Portugal: Proposed law tries to sneak in biometric mass surveillance

Whilst the European Parliament has been fighting bravely for the rights of everyone in the EU to exist freely and with dignity in publicly accessible spaces, the government of Portugal is attempting to push their country in the opposite direction: one of digital authoritarianism.

The Portuguese lead organisation in the Reclaim Your Face coalition, D3 (Defesa dos Direitos Digitais), is raising awareness of how the Portuguese government’s newly proposed video surveillance and facial recognition law amounts to illiberal biometric mass surveillance. Why? Ministers are trying to rush the law through Parliament, endangering the very foundations of democracy on which the Republic of Portugal rests.

Eerily reminiscent of the failed attempt by the Serbian government just two months ago to rush in a biometric mass surveillance law, the Portuguese government has now asked its Parliament to approve a law in a shocking absence of democratic scrutiny. Just two weeks before the national Assembly is dissolved, the government wants Parliamentarians to quickly approve the law, without public consultation or evidence. The law would enable and encourage widespread biometric mass surveillance – even though we have repeatedly shown just how harmful these practices are.

Reclaim Your Face lead organisation EDRi sent a letter to representatives of Portugal’s main political parties, supporting D3’s fight against biometric mass surveillance practices that treat each and every person as a potential criminal. Together, we urged politicians to reject this dystopian law.

Read EDRi’s letter below.

You can also read D3’s thread (in Portuguese) explaining further why this proposed law is such a problem.



Re: Serious fundamental rights concerns about proposed Portuguese video surveillance Law 111/XIV/2


I am writing to you and your colleagues on behalf of European Digital Rights (EDRi), a network of 45 digital human rights groups from across Europe, including D3 – Defesa dos Direitos Digitais, to urge you to oppose this proposed law.

We want to express our deep concern about the Proposed Law 111/XIV/2 on the use of video surveillance by security forces and services. Despite providing no evidence of effectiveness, necessity or proportionality of these measures, the proposal puts forward sweeping measures which would permit the constant video and biometric mass surveillance of each and every person.

There are many reasons why this proposal is likely to be incompatible with the essence of Portugal’s constitutional obligations to ensure that restrictions on fundamental rights are necessary and proportionate (article 18/2); with Portugal’s obligations under the Charter of Fundamental Rights of the European Union (including but not limited to articles 1, 7, 8, 11, 12, 20, 21, 41, 47, 48 and 49); and the European Convention on Human Rights (ECHR).

The proposed law 111/XIV/2:

1. Removes current legal safeguards limiting the use of invasive video surveillance, such that the permanent and nearly omnipresent use of these systems in publicly accessible spaces may be permitted;

2. Permits video surveillance by aerial drones without limits, further creating possibilities for pervasive public surveillance; and

3. Establishes that these vast video surveillance networks may be combined with facial recognition and other AI-based systems in public spaces. Such practices enable the omnipresent tracking of individuals and can thus unduly interfere with a wide range of people’s rights, including the rights to privacy and data protection; to express, associate and assemble freely; to equality, non-discrimination and dignity; and to the presumption of innocence and other due process rights.


Furthermore, the proposal recklessly removes existing safeguards:

Law 111/XIV/2 proposes to withdraw vital powers from the national data protection authority, the Comissão Nacional de Protecção de Dados (CNPD). This means that not only has the government proposed measures which contradict Portugal’s data protection obligations, but that the very authority designated to protect people from undue violations of their rights will be deliberately prevented from carrying out its vital public duties. The CNPD has called this proposal a “gross violation of the principle of proportionality” and has emphasised that it is likely incompatible with the Portuguese Constitution.

The proposal enables biometric mass surveillance practices:

The combined effect of these measures would be highly likely to unduly restrict the rights and freedoms of large parts of the Portuguese population and to constitute unjustified biometric mass surveillance practices. Such measures treat each person as a potential suspect, and they preclude genuinely targeted use, as passers-by are an inherent feature of public spaces. Over 63,000 EU citizens have already objected to these practices via the Reclaim Your Face campaign, including close to 750 Portuguese nationals.
The Italian Data Protection Authority has further confirmed that the use of facial recognition and other biometric identification in public spaces constitutes mass surveillance, even when authorities are searching for specific individuals on a watch-list. This is because, as the European Data Protection Supervisor (EDPS) and European Data Protection Board (EDPB) have emphasised, such surveillance unduly infringes on the personal data and privacy of anyone passing through that space.

A further risk arises from the fact that the proposal requires the processing of especially sensitive data. People’s biometric data, such as their faces, are central to their personal identity and sometimes to their protected characteristics. Processing such data can therefore infringe on rights to dignity, equality and non-discrimination, autonomy and self-determination.

The proposal is at odds with the European Parliament and the United Nations:

The proposed law stands in direct contradiction to the position of the European Parliament, which voted in October 2021 to adopt the ‘AI and criminal law’ report. This official report calls for a ban on biometric mass surveillance, including of the kind proposed in law 111/XIV/2. Other EU opposition to such practices includes the proposed ban on real-time remote biometric identification (RBI) by law enforcement in the EU’s Artificial Intelligence Act, and a call from the EDPS and EDPB to implement a “general ban on any use of AI for an automated recognition of human features in publicly accessible space.”

The need to prohibit, rather than legalise, such practices has also been confirmed by the UN High Commissioner for Human Rights, who warned that it “dramatically increases the ability of State authorities to systematically identify and track individuals in public spaces, undermining the ability of people to go about their lives unobserved” and should therefore be limited or banned.

The proposal undermines the essence of a democratic society:

Mass surveillance is not just bad for individuals, but also for communities. The landmark Census judgement of the German Constitutional Court articulated the threats not only to people’s political rights and civil rights, but also to democracy and “the common good, because self-determination [which is harmed by mass surveillance] is an essential prerequisite for a free and democratic society that is based on the capacity and solidarity of its citizens.”

European and international human rights groups have raised the severe harms of biometric mass surveillance. Constant, invasive surveillance disincentivises people from protesting; suppresses anti-corruption efforts by making it harder for sources to blow the whistle anonymously; and has a general chilling effect on people’s rights and freedoms. Biometric mass surveillance systems have been used across Europe and the world to spy on groups including human rights defenders, LGBT+ communities and people going to church.

Lastly, the hurried manner in which this proposal has been brought forward is grave cause for concern. With the upcoming dissolution of the Portuguese Parliamentary Assembly, the government aims to push through this rights-violating proposal in a rushed manner and without public consultation. This prevents proper democratic scrutiny of the proposal and will undermine people’s trust in the legislative process.

We urge you to consider the rights and freedoms of the people of Portugal and your obligations under constitutional, EU and international law, to reject the proposed video surveillance law 111/XIV/2. We are at your disposal should you wish to discuss any of the above.

Yours sincerely, Diego Naranjo
Head of Policy, EDRi

Building on your support, we can show EU leaders that we do not support the use of technologies that turn our publicly accessible spaces into a permanent police line-up.


Help us grow stronger and sign our citizens’ initiative to ban biometric mass surveillance!

Facebook deleting facial recognition: Five reasons to take it with a pinch of salt

On 2 November, the company formerly known as Facebook announced that, by the end of the year, it will delete its entire database of facial templates used for automated photo tagging on the Facebook app. Yes, that Facebook – the notorious social media platform most recently in the news for a major whistleblowing scandal and a subsequent change of company name from Facebook to “Meta”.

Early reactions praised Facebook for this bold and surprising move. So, has Christmas come early in the digital rights world? Well, not so fast.


This move seems on the surface to be a good thing because it chips away at the group’s power and control over face data from around 13% of the world’s population. However, the reality is that things are not as rosy as Facebook would like you to think.

The latest Facebook announcement reveals exactly why voluntary “goodwill” self-regulation is superficial, and why we need strong EU rules in future legislation like the AI Act – as the Reclaim Your Face campaign demands. 

Here are five pinches of salt for your reality-check on Facebook deleting facial recognition:

1. The Facebook app will delete a database containing the face templates (“faceprints”) of over a billion people, which underpin the facial recognition system used to flag when people’s faces appear in photos and videos, for example for tagging purposes. But what about the underlying algorithm (the eerily named ‘DeepFace’) that powers this facial recognition? According to the New York Times, Facebook stated that DeepFace algorithms will be retained, and the company is very much open to using them in future products.

2. This means that whenever it suits their commercial interests, Facebook can flick the switch to turn their vast facial recognition capacity back on.

3. The Meta group’s initial statement does not say whether the database is the group’s (or even the app’s) only database used for identifying people, or whether others exist. As Gizmodo points out, the commitment doesn’t affect other Meta companies, such as Instagram, which will continue to use people’s biometric data.

4. Facebook has had a lot of bad press recently. So is this a convenient distraction to get praise from their long-time critics, the privacy community? It is probably also no coincidence that, as The Verge reports, this move comes after Facebook had to pay well over half a billion dollars in the US because the Face Recognition feature had been violating people’s privacy.

5. Meta’s press release outlines a desire by the company to do more with face authentication. People’s biometric data is always sensitive, and we increasingly see how authentication can pose serious risks to people’s privacy and equality rights as well as to their access to economic and public services. Given Facebook’s sprawling plans for a “metaverse”, their privacy-invading RayBan glasses, and their track record of massive and systemic privacy intrusions, we cannot trust that they will only use face data in benign and rights-respecting ways.

At its core, the Facebook app’s business model is based on exploiting your data. Far from being an all-out win, this move to delete their face recognition database shows more than ever why we simply cannot rely on the apparent ‘goodwill’ of companies in the place of rigorous regulation. When companies self-regulate, they also have the power to de-regulate as and when they wish.

As Amnesty International’s Dr. Matt Mahmoudi points out, the truly good news in this story is that the international pressure against facial recognition – thanks to movements like Reclaim Your Face and Ban The Scan – is making companies sweat. Mass facial recognition is becoming less socially acceptable as people become more and more aware of its inherent dangers. Much like IBM’s vague 2019 commitment to end general-purpose facial recognition and Amazon’s recently-extended self-imposed pause on the Rekognition facial recognition for law enforcement, it is naive at best to expect that companies will sufficiently rein in their harmful uses of facial recognition and other biometric data.

That’s why Reclaim Your Face continues to fight for a world free from pervasive facial recognition and other forms of biometric mass surveillance.

All EU citizens can sign our official EU initiative, which calls to ban these practices and to hold companies like Facebook to account.

Serbia withdraws a proposed Biometric Surveillance Bill following national and international pressure

On 23 September, the Serbian Minister of Interior Aleksandar Vulin announced that the Draft Law on Internal Affairs, which contained provisions legalising a massive biometric video surveillance system, had been withdrawn from further procedure. This turn of events marks a key victory in SHARE Foundation’s two-and-a-half-year battle against smart cameras in Belgrade, which were installed by the Ministry of Interior and supplied by Chinese tech giant Huawei.

During public consultations, SHARE Foundation sent comments on the Draft Law, which put Serbia in danger of becoming the first country in Europe with permanent indiscriminate surveillance of citizens in public spaces. EDRi’s Brussels office also warned the Serbian government of the dangers to privacy and other human rights if such a law was passed.

Gathering and inspiring the community

This success would not have been possible without SHARE’s community and international partners, such as the EDRi office and other member organisations, as well as related initiatives such as Reclaim Your Face and Ban Biometric Surveillance. Our battle was also supported by Members of the European Parliament, who put international pressure on the Serbian government.

Huge awareness-raising efforts were needed to highlight the importance of this issue, especially in a society where privacy is a low priority, such as Serbia. Through SHARE’s initiative “Thousands of Cameras” (#hiljadekamera), we gathered a community of experts from various backgrounds (law, policy, tech, art, activism) as well as citizens worried about the implications of biometric surveillance in our streets and public spaces. Actions like the “hunt for cameras”, in which we called on citizens to map smart cameras in their neighbourhoods, an online petition against biometric surveillance and a crowdfunding campaign for “Thousands of Cameras” all showed that the fight against biometric surveillance can mobilise people effectively. The datathon organised to build a one-stop platform for the “Thousands of Cameras” initiative was a milestone that enabled us to keep pushing against this dangerously intrusive technology.

Lessons learned 

One of the key preconditions for success was the new Law on Personal Data Protection, modelled after the GDPR, which requires a Data Protection Impact Assessment (DPIA) to be conducted and approved by the Commissioner for Personal Data Protection before intrusive data processing mechanisms are put in place. This led to the Commissioner rejecting the DPIAs conducted by the Ministry of Interior on two occasions, citing that such a system lacked an adequate legal basis.

International standards and opinions on biometric surveillance provided by bodies such as the UN, Council of Europe and European Union institutions all provided valid points on why such technologies should be banned in any society aspiring towards democracy and the full respect of human rights.

However, SHARE also found that a multidisciplinary approach to the topic was necessary. A purely legal angle is inadequate for arguing against such a controversial practice. It needs to be tackled from different perspectives, such as human rights concerns, technological aspects (techno-solutionism) and the impact on citizens’ everyday lives, particularly in vulnerable communities.

Getting the message across via the media was also instrumental. In the past couple of years, over 300 news articles have been written about biometric surveillance in Belgrade, in both domestic and international media. In that respect, it is of the utmost importance to forge partnerships with media and journalists interested in the topic, as they can contribute immensely to spreading awareness and mobilising people.





ReclaimYourFace is a movement led by civil society organisations across Europe:

Access Now ARTICLE19 Bits of Freedom CCC Defesa dos Direitos Digitais (D3) Digitalcourage Digitale Gesellschaft CH Digitale Gesellschaft DE Državljan D EDRi Electronic Frontier Finland epicenter.works Hermes Center for Transparency and Digital Human Rights Homo Digitalis IT-Political Association of Denmark IuRe La Quadrature du Net Liberties Metamorphosis Foundation Panoptykon Foundation Privacy International SHARE Foundation
In collaboration with our campaign partners:

AlgorithmWatch AlgorithmWatch/CH All Out Amnesty International Anna Elbe Aquilenet Associazione Luca Coscioni Ban Facial Recognition Europe Big Brother Watch Certi Diritti Chaos Computer Club Lëtzebuerg (C3L) CILD D64 Danes je nov dan Datapanik Digitale Freiheit DPO Innovation Electronic Frontier Norway European Center for Not-for-profit Law (ECNL) European Digital Society Eumans Football Supporters Europe Fundación Secretariado Gitano (FSG) Forum InformatikerInnen für Frieden und gesellschaftliche Verantwortung Germanwatch German acm chapter Gesellschaft Fur Informatik (German Informatics Society) GONG Hellenic Association of Data Protection and Privacy Hellenic League for Human Rights info.nodes irish council for civil liberties JEF, Young European Federalists Kameras Stoppen Ligue des droits de L'Homme (FR) Ligue des Droits Humains (BE) LOAD e.V. Ministry of Privacy Privacy First Privacy Lx Privacy Network Projetto Winston Smith Reporters United Saplinq Science for Democracy Selbstbestimmt.Digital STRALI Stop Wapenhandel The Good Lobby Italia UNI-Europa Unsurv Vrijbit Wikimedia FR Xnet


Reclaim Your Face is also supported by:

Jusos Piratenpartei DE Pirátská Strana

MEP Patrick Breyer, Germany, Greens/EFA
MEP Marcel Kolaja, Czechia, Greens/EFA
MEP Anne-Sophie Pelletier, France, The Left
MEP Kateřina Konečná, Czechia, The Left



Should your organisation be here, too?
Here's how you can get involved.
If you're an individual rather than an organisation, or your organisation type isn't covered in the partnering document, please get in touch with us directly.