AI has numerous applications across various fields, including the military domain. The increasing degree of autonomy in some decision-making systems has led to discussions on the possible future use of lethal autonomous weapons systems (LAWS). A central issue in these discussions is the assignment of moral responsibility for AI-based outcomes. Several authors claim that the high degree of autonomy of such systems gives rise to a so-called “responsibility gap.” In recent years, there has been a surge of philosophical literature on the concept of responsibility gaps, and various solutions have been devised to close or bridge them. To move the research on LAWS and the problem of responsibility forward, it is important to deepen our understanding of the different perspectives in this debate. This paper attempts to do so by disentangling the various arguments and providing a critical overview. After giving a brief outline of the state of the technology of LAWS, I review the debate over responsibility gaps along three distinctions: those who believe that responsibility gaps exist versus those who do not; those who hold that responsibility gaps constitute a new moral problem versus those who argue they do not; and those who claim that solutions can succeed versus those who believe the problem is unsolvable.