The theme of self-produced weapons intertwines diverse considerations of an ethical, legal, engineering and data science nature. The critical starting point concerns the use of 3D printing for the self-production of weapons: the doctrinal and ethical debate is open, while no published case-law decisions have been found. From a technical point of view, weapons produced with materials other than metal would be more dangerous, since their possession could not be detected by metal detectors. This possibility demonstrates how the combination of 3D printing and AI can lead to further development of Autonomous Weapon Systems, especially drones, which are no longer confined to science fiction novels but may appear on the market and even become available for mass consumption. It also underscores the need to promote negotiations toward an international treaty banning the production and use of lethal autonomous weapons. The combination of such printers with biometric facial recognition algorithms raises growing concerns for physical, individual and collective safety. Biometric recognition technology allows the identification of individuals through the measurement and analysis of somatic or behavioural traits; it relies on intelligent software, modelled on the human ability to recognize and identify faces, that collects and analyses huge amounts of data and can evolve its capabilities beyond its programmer's initial intention. It is clear that allowing the self-production of such devices by non-expert users could cause more harm than benefit.
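To make concrete what "identification through the measurement and analysis of somatic traits" typically means in practice, the following is a minimal, purely illustrative sketch of the matching step used in embedding-based face recognition: each face image is mapped (by a neural network, not shown here) to a fixed-length vector, and identity is claimed when two vectors are sufficiently similar. The embedding dimension, the random vectors, and the threshold below are all hypothetical placeholders, not parameters of any real system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(emb1: np.ndarray, emb2: np.ndarray, threshold: float = 0.8) -> bool:
    """Declare a match when similarity exceeds a tuned threshold.

    The threshold value is illustrative; real systems calibrate it
    against false-accept / false-reject rates.
    """
    return cosine_similarity(emb1, emb2) >= threshold

# Simulated embeddings (random stand-ins for the output of a face encoder).
rng = np.random.default_rng(0)
enrolled = rng.normal(size=128)                              # stored at enrolment
probe_same = enrolled + rng.normal(scale=0.1, size=128)      # same face, slight variation
probe_other = rng.normal(size=128)                           # unrelated identity

print(same_person(enrolled, probe_same))    # small perturbation -> match
print(same_person(enrolled, probe_other))   # independent vector -> no match
```

The sketch shows why such systems scale so easily: once embeddings are stored, comparing a face against millions of enrolled identities reduces to cheap vector arithmetic, which is precisely what raises the collective-safety concerns discussed above.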
The purpose of this contribution is to study how to regulate the effects of such self-made autonomous robots, since their use may have a devastating and disruptive effect on public order and social peace, especially in the event of violent riots.
In recent years, the need to regulate robots and Artificial Intelligence, together with the urgency of reshaping the civil liability framework, has become apparent in Europe. Although civil liability has been the subject of many studies and resolutions, multiple attempts to harmonize EU tort law have been unsuccessful, and only the liability of producers for defective products has been harmonized so far. In 2021, by publishing the AI Act proposal, the European Commission achieved its goal of regulating AI at the European level, classifying smart robots as "high-risk systems". This new piece of legislation, albeit tackling important issues, does not focus on liability rules. However, regulating the responsibility of developers and manufacturers of robots and AI systems, in order to avoid a fragmented legal framework across the EU and an uneven application of liability rules in each Member State, remains an important issue that raises many concerns in the industry sector. In particular, deep learning techniques need to be carefully regulated, as they challenge the traditional liability paradigm: it is often not possible to know the reason behind a model's output, and neither the programmer nor the manufacturer is able to predict the AI's behavior. For this reason, some authors have argued that liability should be taken away from producers and programmers when robots are capable of acting autonomously from their original design, while others have proposed a strict liability regime. This article explores liability issues concerning AI and robots with regard to users, producers, and programmers, especially where machine learning techniques are involved, and suggests some regulatory solutions for European lawmakers.