Artificial Intelligence (AI) is rapidly transforming the criminal justice system. One promising application of AI in this field is the gathering and processing of evidence to investigate and prosecute crime. Despite its great potential, AI evidence also raises novel challenges for the evidentiary requirements of the European criminal law landscape. This study aims to contribute to the burgeoning body of work on AI in criminal justice by elaborating on an issue that has not received sufficient attention: the challenges triggered by AI evidence in criminal proceedings. The analysis is based on the norms and standards for evidence and fair trial, which are fleshed out in a substantial body of European case law. Through the lens of AI evidence, this contribution reflects on these issues and offers new perspectives, providing recommendations to help address the identified concerns and ensure that fair trial standards are effectively respected in the criminal courtroom.
EU data protection rules can be difficult for researchers to navigate, particularly when processing massive datasets containing personal data for Artificial Intelligence (AI) development. This article examines how data protection intersects with AI research, elucidating the issues that arise from the use of large-scale databases containing personal data to train, test and validate AI systems. The key objectives of this work are to (1) scrutinise the data protection requirements and limits for the processing of personal data in AI research, (2) reflect on possible complications regarding data quality requirements for trustworthy AI and General Data Protection Regulation (GDPR) compliance, and (3) present possible ways forward to reconcile GDPR requirements and AI research. While reviewing and mapping relevant provisions and guidance, we identify data protection challenges posed by the use of massive databases containing personal data for AI research. The findings suggest that, while the legal regime for research under the GDPR resolves some of the challenges identified, others, such as the legal basis for processing and the processing of special categories of data, remain unaddressed. We argue that the nature of these complications will make it difficult for EU researchers to advance trustworthy AI efforts. The analysis concludes by suggesting possible ways to tackle the remaining issues.