News




Workplace, public space: workers organising in the age of facial recognition

By Reclaim Your Face campaign coordinator European Digital Rights (EDRi) and campaign partner UNI Europa.
This article was originally published in Social Europe.


A spectre is haunting Europe—the spectre of workers’ surveillance. Under the cover of ‘emergency’ legislation against ‘terrorism’ or to contain the Covid-19 pandemic, surveillance of working people is continually being expanded and normalised, in the workplace and on our streets. This takes various forms, with varying degrees of invasiveness. It can be Amazon spying on its workers in private Facebook groups or using Covid-19 health-tracking technology to keep tabs on at least 340,000 workers. And it can be smaller employers using facial recognition to monitor employees working remotely.

Workers’ surveillance can also take the broader shape of facial-recognition systems across Europe’s publicly accessible spaces. This can—among many other harms—suppress workers’ organising. By imposing an omnipresent sense of being watched, surveillance has a clear chilling effect on workers’ readiness to exercise their right to freedom of assembly.

Short-term appeal

Both corporations and governments may see short-term appeal in this expanding wave of surveillance. While flashy gadgets help paint a picture of a techno-utopia, a key driver is the quest for control.

The dominant positions consolidated by data corporations such as Amazon are only matched by their growing unaccountability. We cannot accept their logic of treating people as a threat—let alone see that adopted by governments. That would accelerate the erosion of trust we are seeing in our societies. 

The deployment of facial recognition (or other biometric surveillance technologies) by governments or companies is an imminent threat to workplace democracy. Amid protest outside the headquarters of a company, CCTV cameras could be repurposed for union-busting. Secret algorithms, lacking a scientific basis, could be applied to mark and identify ‘troublemakers’. Faced with marching in the streets, police could use footage and software to quickly identify and target leaders—as well as people supporting or even reporting on the demonstration.

Hard-won rights and freedoms are not negotiable. Ramping up surveillance of individuals is not the way to address collective grievances. Clear boundaries must be set and the collective say of relevant groups of the population over decisions affecting them must be strengthened.

Mass surveillance, at work and beyond, suffocates legitimate social discourse without assuaging the need for it. The history of industrial relations shows time and again that brushing issues under the carpet inevitably results in them re-emerging down the line—often with much more serious consequences.

‘Security-industrial complex’

The documented partnership between private and public surveillance actors in the European Union has been described by the longstanding civil-liberties organisation Statewatch as the ‘security-industrial complex’. Together with the threats to workers’ organising, this phenomenon shows why the EU’s new artificial-intelligence regulation must go beyond what the European Commission has proposed—and ban biometric mass surveillance, by both private and public actors.

Furthermore, the use of AI in the workplace—in recruitment, appraisals and so on—must, like all workplace-related rules, be subject to collective-bargaining agreements. The commission’s draft would only require self-evaluation by the companies selling these technologies, leaving the fox to guard the henhouse.

Workers’ rights will be won, as with any other human rights, by collective struggles: collective bargaining or striking at the workplace and protesting on the streets will remain the main tools. But in an increasingly digitalised society, where emails, conversations, faces and bodies can be put under constant surveillance, many will fear for their job security and may choose to stay put rather than become visible.

This is why banning biometric mass surveillance in public spaces and strongly legislating to control harmful AI technologies in the workplace are key to advancing workers’ rights.

Karlsruhe win against biometric mass surveillance in Germany

By ReclaimYourFace campaign lead organisation Chaos Computer Club (CCC)


In November 2020, reporters at Netzpolitik.org revealed that the German city of Karlsruhe wanted to establish a smart video surveillance system in the city centre. The plan involved an AI system that would analyse the behaviour of passers-by and automatically identify conspicuous behaviour. The biometric mass surveillance system was presented by authorities as “data protection compliant video surveillance”. After the intervention of the Karlsruhe chapter of EDRi member Chaos Computer Club (CCC), also known as Entropia, the project was buried in May 2021. This success adds to previous wins by EDRi members involved in the ReclaimYourFace campaign, which calls on the EU to ban biometric mass surveillance across all EU countries.

Read More

Can a COVID-19 face mask protect you from facial recognition technology too?

Andreea Belu, Campaigns and Communications Manager at EDRi
Harmit Kambo, Campaigns Director at Privacy International

Our relationship with ‘public space’ is being redefined, not just by a global pandemic, but also by a new era of biometric surveillance technologies. Biometric mass surveillance enables companies and authorities to track us based on unique personal data and identify us whenever, wherever we go.

The increasing use of facial recognition and other biometric surveillance technologies – on our streets, in train stations, at protests, at sports matches and even in our global ‘town square’, Facebook – means that our freedom to be anonymous in public spaces, our freedom to just be, really does face an existential threat.

Mass facial recognition risks our collective futures and shapes us into fear-driven societies of suspicion.

This got folks at EDRi and Privacy International brainstorming. Could the masks that we now wear to protect each other from Coronavirus also protect our anonymity, preventing the latest mass facial recognition systems from identifying us?

A COVID-19 facemask that tricks facial recognition? Let’s try.

At first, it seemed an easy task. After all, a face mask covers most of your face already.

However, we already know that facial recognition systems are remarkably sophisticated and can identify (or misidentify) people with relatively little ‘face data’. In fact, while we were working on this project, new revelations showed that a normal face mask doesn’t help you ‘evade’ facial recognition.

Given that anti-surveillance clothing and make-up have had some success in confusing facial recognition systems in the past, we thought that it would be worth experimenting with a similar approach with a face mask pattern.

In looking for the right pattern design, we sought the advice of Tijmen Schep, a researcher and artist with experience in tricking facial recognition algorithms.

More difficult than it seems – the tech is too creepy.

Mask design: Ed Grace

And that’s where it got complicated. Here are just some of the challenges we encountered in our journey:

  • Every facial recognition system works in a different way. This means that a design that might ‘fool’ one system might not fool another.
  • Most facial recognition algorithms are opaque black boxes. The lack of transparency in the use of facial recognition makes it difficult to understand what is being used, how and where. As Adam put it: “There is really no such thing as a mask that blocks ‘face recognition’ but rather a mask that blocks, for example, a ResNet-50 CNN model that was trained on VGG Face2 dataset.”
  • Masks have a small surface area. A pattern that might work on an item like a t-shirt might not be effective in confusing FR when used on a small item like a mask.
  • Stopping face detection, let alone recognition. We tried to create a pattern that would confuse the tech in such a way that it wouldn’t even perceive a face, let alone be able to recognise it. This might be through using a pattern made up of simple black and white geometric forms. However, face detection technologies are so good these days that we soon realised this approach was unlikely to fool them.
  • Face detected. Stop recognition? So we then considered an alternative approach. Instead of geometric forms, what about using warped face-like features, perhaps in a mixture of natural and ‘unnatural’ colours, to mislead the system or at least lower the confidence of a match score? We were optimistic about this, but then we got stuck in the black-box problem: we could only know of a design’s effectiveness by testing it across a vast range of currently opaque facial recognition technologies, which are constantly being updated and increasingly sophisticated.
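The black-box problem described above can be made concrete: the only way to judge a candidate mask pattern is to run the same image through many different detectors and compare the results. Below is a minimal, purely illustrative sketch of such an evaluation harness in Python. The detector entries are hypothetical stand-ins with fixed confidence scores; in a real test each would wrap a different model or library (e.g. a Haar cascade or a CNN-based detector).

```python
# Illustrative sketch: evaluate one candidate mask image against several
# face detectors and report which ones still find a face. The detectors
# here are hypothetical stand-ins, not real models.

from typing import Callable, Dict

# A detector takes image bytes and returns a detection confidence in
# [0, 1]; at or above the threshold we count the face as detected.
Detector = Callable[[bytes], float]

def evaluate_mask(image: bytes,
                  detectors: Dict[str, Detector],
                  threshold: float = 0.5) -> Dict[str, bool]:
    """Return, per detector, whether a face was still detected."""
    return {name: det(image) >= threshold for name, det in detectors.items()}

if __name__ == "__main__":
    # Fixed-confidence stand-ins, showing why a pattern that fools one
    # system may not fool another.
    detectors = {
        "haar_cascade": lambda img: 0.9,  # robust to the pattern
        "cnn_model_a": lambda img: 0.2,   # fooled by the pattern
    }
    results = evaluate_mask(b"masked-face-image-bytes", detectors)
    # e.g. {'haar_cascade': True, 'cnn_model_a': False}
```

Even this toy harness shows the core difficulty the campaigners hit: a verdict is only ever relative to the specific detectors tested, and real systems are opaque and constantly updated.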

What did we learn?

Here’s what we learnt from our exploration into stopping intrusive facial recognition technologies from detecting or recognising our faces when wearing a face mask.

  • We do not know whether it is possible to design a face mask that can protect your anonymity against facial recognition technologies.
  • What we know for sure is that it is virtually impossible to know whether even our most ‘educated guess’ would be totally effective, partially effective, or totally ineffective against all, some, or indeed any facial recognition systems.
  • The small acts of resistance we might take against these powerful technologies are not a substitute for structural, long-lasting solutions.
  • People should not carry the burden of masking themselves to maintain their anonymity. Instead, governments must ensure that unlawful biometric mass surveillance, such as mass facial recognition, is banned in our publicly accessible spaces.

Why did we produce the mask anyway?

The mask is a symbol of resistance against the growing use of mass facial recognition. Wearing this mask means wearing the story of resistance. Wearing this mask means standing up to power, ready for collective action, ready to #ReclaimYourFace.

Since October 2020, a growing coalition of now 61 civil society organisations has mobilised across Europe with one message: “The EU must ban biometric mass surveillance!”

The solutions we ultimately need are not clever facial-recognition-thwarting face masks. Instead, we need politicians and decision-makers who do their job right. We need strong regulation against the use of biometric mass surveillance – across the EU and beyond.

Join the movement, make your voice heard today.

Italy wins: DPA blocks facial recognition system and MP proposes moratorium

By Laura Carrer (research & advocacy at Italian campaign lead organisation Hermes Center) and Riccardo Coluccini (Italian campaign contributor)

On Friday 16 April, the Italian Data Protection Authority (DPA) rejected the SARI Real Time facial recognition system acquired by the police. The DPA argues the system lacks a legal basis and, as designed, would implement a form of mass surveillance. Two days earlier, on 14 April, a member of the Italian Parliament, Filippo Sensi, proposed a moratorium on the use of video surveillance tools that use facial recognition.

Read More

61 MEPs urge the EU to ban biometric mass surveillance!

This week, 61 Members of the European Parliament (MEPs) from across the political spectrum, including representatives of the Greens, the Left, Renew Europe, the Socialists & Democrats (S&D) and the Identity & Democracy (ID) groups raised their voices in agreement with the Reclaim Your Face campaign. They wrote not one, but two powerful letters, which made their position clear to the European Commission (EC): discriminatory biometric mass surveillance practices can have no place in our societies! The letters come only a few days before the EC’s publication of a new and long-awaited EU law on artificial intelligence.

Read More

Roma & Sinti rights, resistance and facial recognition: In Conversation

April 19, 2021, 2:00 pm – 3:00 pm CEST

This webinar, run by the Reclaim Your Face campaign, will feature a conversation with activists and experts Roxanna-Lorraine Witt and Benjamin Ignac. It will explore the connections between Sinti and Roma rights and digital technology and data, in particular the rise of facial recognition. The webinar will likely be of interest to those engaged in EU policy, human rights and digital rights, social justice, anti-discrimination or activism.

Read More

Letter to EU Commissioner for Justice

On 1 April, a coalition of 51 human rights, digital rights and social justice organisations sent a letter to European Commissioner for Justice, Didier Reynders, calling on the Commissioner to prohibit uses of biometrics that enable mass surveillance or other dangerous and harmful uses of AI. The letter comes ahead of the long-awaited proposal for new EU laws on artificial intelligence.

Read More

Evidence shows why we need a law against biometric mass surveillance

You often hear:

“Facial recognition of whole populations? But that’s just in China. We’re democratic in the EU. It’ll never happen to us.”

Unfortunately, it is already happening. Read below a summary of the extensive evidence we’ve compiled, documenting the rapid spread of biometric mass surveillance in EU countries. The (only) good thing? We can still stop it.

Read More

How he reclaimed his face from Clearview AI

As a result of a complaint filed by Matthias Marx, a member of the Chaos Computer Club (an EDRi member), the Hamburg Data Protection Authority deemed Clearview AI’s biometric photo database illegal in the EU.

By ReclaimYourFace campaign lead organisation Chaos Computer Club (CCC)
Originally published by noyb here.

In January 2020, two days after the New York Times revealed the existence of the face search engine Clearview AI, Matthias Marx, a member of the Chaos Computer Club (an EDRi member), sent a data subject access request to Clearview AI. He was surprised to learn that the company was processing his biometric data and lodged a complaint to the Hamburg data protection authority (DPA). As a result, the Hamburg DPA has now deemed Clearview AI’s biometric photo database illegal in the EU.

Chronological Review

In order to facilitate the data subject access request, Matthias shared a photo of his face. To confirm Matthias’ identity and guard against fraudulent access requests, Clearview AI additionally requested a government-issued ID. Although Matthias ignored the request, Clearview AI sent him search results based on the photo he provided and confirmed deletion of the search photo in February 2020.

Matthias then electronically submitted a complaint to the Hamburg DPA that Clearview AI was processing his biometric data without consent. The DPA first rejected the complaint, arguing that the GDPR is not applicable. After further submissions, the DPA then eventually launched preliminary investigations. At the same time, noyb offered their support.

In May 2020, Clearview AI sent another, unsolicited, response to Matthias’ request and included new search results. Apparently, Clearview AI had not deleted the search photo as promised. While Clearview AI’s first answer only showed two photos of Matthias, this time it also contained eight photos of other people.

In August 2020, the Hamburg DPA ordered Clearview AI to answer a set of 17 questions under threat of penalties. Clearview AI replied in September 2020. In January 2021, Hamburg DPA initiated administrative proceedings against Clearview AI.

Background

The decision acknowledges the territorial scope of the GDPR, which is triggered for entities outside the EU if they monitor the behaviour of data subjects in the EU. Clearview AI had argued against the applicability of the GDPR, saying that they do not monitor the behaviour of individuals but provide only a “snapshot of some photos available on the internet”.

The Hamburg DPA discarded this argument for two reasons. For one, Clearview AI’s results include information that stretches over a period of time. For another, Clearview AI’s database links photos with their source and associated data. As such, it records information in a targeted manner – the definition of monitoring. Moreover, the Hamburg DPA noted that the subsequent use of collected personal data for profiling purposes, as happens with Clearview AI’s results, can be seen as a strong indicator for the existence of monitoring.

Taking into account subsequent use is important because it underlines that downstream processing by other entities can, to a certain extent, be used to classify the nature of the upstream processing. In other words, entities cannot fully launder their data processing by handing off the dirty work downstream.

Despite clearly stating that Clearview AI lacked a legal basis for its biometric profile, the Hamburg DPA unfortunately only ordered the deletion of the complainant’s biometric profile – it neither ordered the deletion of the complainant’s photos already collected, nor did it issue an EU-wide ban on Clearview AI’s processing. noyb had submitted arguments on why the Hamburg DPA could issue an EU-wide ban against Clearview.

In conclusion, this decision is only a first step, and further litigation is necessary. While Europeans now have a precedent to rely on, we need decisions that also declare illegal the harvesting of photos for purposes totally incompatible with their initial publication.



ReclaimYourFace is a movement led by civil society organisations across Europe:

Access Now ARTICLE19 Bits of Freedom CCC Defesa dos Direitos Digitais (D3) Digitalcourage Digitale Gesellschaft CH Digitale Gesellschaft DE Državljan D EDRi Electronic Frontier Finland epicenter.works Hermes Center for Transparency and Digital Human Rights Homo Digitalis IT-Political Association of Denmark IuRe La Quadrature du Net Liberties Metamorphosis Foundation Panoptykon Foundation Privacy International SHARE Foundation
In collaboration with our campaign partners:

AlgorithmWatch AlgorithmWatch/CH All Out Amnesty International Anna Elbe Aquilenet Associazione Luca Coscioni Ban Facial Recognition Europe Big Brother Watch Certi Diritti Chaos Computer Club Lëtzebuerg (C3L) CILD D64 Danes je nov dan Datapanik Digitale Freiheit DPO Innovation Electronic Frontier Norway European Center for Not-for-profit Law (ECNL) European Digital Society Eumans Football Supporters Europe Fundación Secretariado Gitano (FSG) Forum InformatikerInnen für Frieden und gesellschaftliche Verantwortung Germanwatch German ACM chapter Gesellschaft für Informatik (German Informatics Society) GONG Hellenic Association of Data Protection and Privacy Hellenic League for Human Rights info.nodes Irish Council for Civil Liberties JEF, Young European Federalists Kameras Stoppen Ligue des droits de l'Homme (FR) Ligue des Droits Humains (BE) LOAD e.V. Ministry of Privacy Privacy First Privacy Lx Privacy Network Progetto Winston Smith Reporters United Saplinq Science for Democracy Selbstbestimmt.Digital STRALI Stop Wapenhandel The Good Lobby Italia UNI-Europa Unsurv Vrijbit Wikimedia FR Xnet


Reclaim Your Face is also supported by:

Jusos Piratenpartei DE Pirátská Strana

MEP Patrick Breyer, Germany, Greens/EFA
MEP Marcel Kolaja, Czechia, Greens/EFA
MEP Anne-Sophie Pelletier, France, The Left
MEP Kateřina Konečná, Czechia, The Left



Should your organisation be here, too?
Here's how you can get involved.
If you're an individual rather than an organisation, or your organisation type isn't covered in the partnering document, please get in touch with us directly.