The rapid development of facial recognition technologies (FRT) has created complex ethical choices in balancing individual privacy rights against societal safety. Within this space, the increasingly commonplace use of these technologies by law enforcement agencies offers a particular lens for probing this complex landscape, its application, and the acceptable extent of citizen surveillance. This analysis focuses on the regulatory contexts and recent case law in the United States (USA), United Kingdom (UK), and European Union (EU) concerning the use and misuse of FRT by law enforcement agencies. The USA is one of the main global regions in which the technology is rapidly evolving, and yet it has a patchwork of legislation with comparatively little emphasis on data protection and privacy. In the EU and the UK, by contrast, there has been a critical focus on developing accountability requirements, particularly in the context of the EU's General Data Protection Regulation (GDPR) and the legal focus on Privacy by Design (PbD). Globally, however, there is no standardised human rights framework or set of regulatory requirements that can be readily applied to FRT rollout. This article offers a discursive discussion of the ethical and regulatory dimensions at play in these spaces, including data protection and human rights frameworks. It concludes that data protection impact assessments (DPIAs) and human rights impact assessments, together with greater transparency, regulation, audit, and explanation of FRT use in individual contexts, would improve FRT deployments. In addition, it sets out ten critical questions which, it suggests, need to be answered by lawmakers, policy makers, AI developers, and adopters for the successful development and deployment of FRT, and of AI more broadly.
The publication of the UK’s National Artificial Intelligence (AI) Strategy represents a step-change in the national industrial, policy, regulatory, and geo-strategic agenda. Although there is a multiplicity of threads to explore, this text can be read primarily as a ‘signalling’ document. Indeed, we read the National AI Strategy as a vision for innovation and opportunity, underpinned by a trust framework that places innovation and opportunity at the forefront. We provide an overview of the structure of the document and offer commentary on various standouts. Our main takeaways are:

- Innovation First: a clear signal that innovation sits at the forefront of the UK’s data priorities.
- Alternative Ecosystem of Trust: whether the UK’s regulatory-market norms become a preferred ecosystem depends on the regulatory system and delivery frameworks required.
- Defence, Security and Risk: security and risk are discussed in terms of both the utilisation and the governance of AI.
- Revision of Data Protection: the signal is that the UK is indeed seeking to position itself as less stringent regarding data protection and the necessary documentation.
- EU Disalignment, or Atlanticism?: questions are raised regarding a step back in terms of data protection rights.

We conclude with further notes on data flow continuity, the feasibility of a sector-based approach to regulation, legal liability, and the lack of a method of engagement for stakeholders. Whilst the strategy sends important signals for innovation, achieving ethical innovation is a harder challenge and will require a carefully evolved framework built with appropriate expertise.
With its proposed AI Act (April 2021), the EU aspires to lead the world in trustworthy AI regulation. In this brief, we summarise and comment on the ‘Presidency compromise text’ (November 2021), a revised version of the proposed act reflecting consultation and deliberation among member states and other actors. The compromise text echoes the sentiment of the original text, much of which remains largely unchanged; however, there are important shifts and some significant changes. Our main comments focus on exemptions to the act with respect to national security; changes that seek to further protect research, development, and innovation; and the attempt to clarify the draft legislation’s stance on algorithmic manipulation. Our target readership is those interested in tracking the evolution of the proposed EU AI Act, such as policy-makers and members of the legal profession.