Since 2019, over 600 law enforcement agencies across the United States have started using a groundbreaking facial recognition app developed by Clearview AI, a tech start-up that now plans to market its technology in Europe as well. While the Clearview app is one expression of the wider phenomenon of repurposing privately held data for law enforcement, its use in criminal proceedings is likely to encroach on individuals' rights in unprecedented ways. Indeed, the Clearview app goes far beyond traditional facial recognition tools: whereas these have historically been limited to matching against government-stored images, Clearview combines its technology with a database of over three billion images published on the Internet. Against this background, this article will review the use of this new investigative tool in light of the European Union (EU) legal framework on privacy and data protection. The proposed assessment will proceed as follows. Firstly, it will briefly examine the lawfulness of Clearview AI's data scraping practices under the General Data Protection Regulation. Secondly, it will discuss the transfer of scraped data from the company to EU law enforcement agencies under the regime of Directive (EU) 2016/680 (the Police Directive). Finally, it will analyse the compliance of the Clearview app with Article 10 of the Police Directive, which lays down the criteria for the lawful processing of biometric data. More specifically, this last analysis will focus on the strict necessity test, as defined in the Charter of Fundamental Rights of the European Union and the European Convention on Human Rights. Following this assessment, it will be argued that the use of the Clearview app in criminal proceedings is highly problematic in light of EU legislation on privacy and data protection.
Police departments are increasingly relying on surveillance technologies to tackle public security issues in smart cities. Automated facial recognition is deployed in public spaces for the real-time identification of suspects and individuals subject to warrants. In some cases, law enforcement goes even further by also exploiting emotion recognition technologies. In preventive operations, emotion facial recognition (EFR) is used to infer individuals' inner affective states from traits such as facial muscle movements. In this way, law enforcement aims to obtain insights into unknown persons acting suspiciously in public or strategic venues (e.g. train stations and airports). While the deployment of such tools may still seem confined to dystopian scenarios, it is already a reality in some parts of the world. Hence, there is a need to explore their compatibility with the European human rights framework. The chapter undertakes this task and examines whether and how EFR can be considered compliant with the rights to privacy and data protection, the freedom of thought, and the presumption of innocence.

Isadora Neroni Rezende is a PhD candidate in Law, Science and Technology - Rights of the Internet of Everything.