The removal of direct human involvement from the decision to apply lethal force lies at the core of the controversy surrounding autonomous weapon systems, as well as broader applications of artificial intelligence and related technologies to warfare. Far from being a purely technical question of whether soldiers can be removed from the ‘pointy end’ of combat, the emergence of autonomous weapon systems raises a range of serious ethical, legal, and practical challenges that remain largely unresolved by the international community. In response, the international community has seized on the concept of ‘meaningful human control’. Meeting this standard will require doctrinal and operational responses as well as technical responses at the design stage. This paper focuses on the technical responses, considering how value sensitive design could help ensure that autonomous systems remain under the meaningful control of humans. It also challenges the tendency to assume a universalist perspective when discussing value sensitive design. Drawing on previously unpublished quantitative data, the paper critically examines how perspectives on key ethical considerations, including conceptions of meaningful human control, differ among policymakers and scholars in the Asia Pacific. On the basis of this analysis, the paper calls for the development of a more culturally inclusive form of value sensitive design and puts forward the basis of an empirically grounded normative framework for guiding designers of autonomous systems.
Despite the growing breadth of research on the perceived risks and benefits of Autonomous Weapon Systems (AWS), there remains a dearth of research into how design factors shape military officers’ perceptions of AWS. This paper demonstrates that, for this emerging generation of military leaders, ease of use and user attitudes toward the concept of deploying an autonomous weapon system would pose less of a barrier to trusted deployment than ensuring that autonomous systems have robust, transparent, and reliable decision-making processes, and that operators or supervisors are able to meaningfully monitor the systems nominally under their command. The core contribution of this paper is to address how deliberate design choices could improve or diminish the capacity of junior officers to exercise meaningful human control over autonomous systems.