We explore existing political commitments by states regarding the development and use of lethal autonomous weapon systems. We carry out two background reviews: the first addresses ethical and legal framings and proposals from the recent academic literature; the second addresses recent formal policy principles endorsed by states, with a focus on the principles adopted by the United States Department of Defense and the North Atlantic Treaty Organization. We then develop two conceptual case studies. The first addresses the interrelated principles of explainability and traceability, leading to proposals for acceptable limits on the scope of these principles. The second considers deception in warfare and how it may be viewed in the context of ethical principles for lethal autonomous weapon systems.
This paper investigates the ethical implications of using adversarial machine learning for the purpose of obfuscation. We suggest that adversarial attacks can be justified by privacy considerations but that they can also cause collateral damage. To clarify the matter, we employ two use cases, facial recognition and medical machine learning, to evaluate the collateral-damage counterarguments to privacy-induced adversarial attacks. We conclude that obfuscation by data poisoning can be justified in the facial recognition case but not in the medical case. We motivate our conclusion with psychological arguments about change, privacy considerations, and purpose limitations on machine learning applications.