This article examines the development of Russian-EU cooperation against organized crime and terrorism. It analyzes the EU-Russia Partnership and Cooperation Agreement, both in general and with respect to these two spheres, as well as the Agreement on Cooperation between the European Police Office and the Russian Federation. The article also considers other forms of cooperation and the EU-Russia Road Map on the Common Space of Freedom, Security and Justice. Finally, it identifies the problems and prospects of Russian-EU cooperation in the fields of organized crime and terrorism.
This study considers the threats posed to human rights by the rapid development of artificial intelligence (AI), along with potential legal mitigations. The EU's active efforts in the field of AI regulation are particularly relevant to such research, given the EU's approach centred on citizens' rights. The study therefore aims to describe the key features of the EU approach to regulating AI in the context of human rights protection, to identify its achievements and deficiencies, and to propose improvements to existing provisions. The analysis of the proposed AI Act pays special attention to provisions intended to eliminate or mitigate the main risks and dangers of AI. The intensive ongoing development of AI regulation in the EU (the Presidency Compromise Text presented by the Council of the EU, amendments of the European Committee of the Regions, opinions of interested parties and human rights organisations, etc.) makes this study especially timely in its highlighting of problematic aspects. The analysis shows that, on closer examination, the proposed law leaves many sensitive and controversial issues unsettled; in the context of AI applications, it is better regarded as an emergency measure intended to rapidly integrate purportedly trustworthy AI into human society. On the basis of this analysis, the authors propose potential improvements to the AI Act, including providing for updates to the lists of all categories of AI systems, clarifying the concept of transparency, and eliminating the self-assessment procedure. The potential reclassification of some AI systems currently defined as limited-risk into the high-risk or prohibited categories should also be considered.