This paper proposes a set of five ethical principles, together with seven high-level messages, as a basis for responsible robotics. The Principles of Robotics were drafted in 2010 and published online in 2011. Since then the principles have influenced, and continue to influence, a number of initiatives in robot ethics but have not,
This is a call for informed debate on the ethical issues raised by the forthcoming widespread use of robots, particularly in domestic settings. Research shows that humans can sometimes become very abusive towards computers and robots, particularly when they are seen as human-like, and this raises important ethical issues. The designers of robotic systems need to take an ethical stance on at least three specific questions. First, is it acceptable to treat artefacts, particularly human-like artefacts, in ways that we would consider morally unacceptable to treat humans? Second, if so, just how much sexual or violent 'abuse' of an artificial agent should we allow before we censure the behaviour of the abuser? Third, is it ethical for designers to attempt to 'design out' abusive behaviour by users? Conclusions on these and related issues should be used to modify professional codes as a matter of urgency.
In this paper, we take moral agency to be that context in which a particular agent can, appropriately, be held responsible for her actions and their consequences. In order to understand moral agency, we will discuss what it would take for an artefact to be a moral agent. For reasons that will become clear over the course of the paper, we take the artefactual question to be a useful way into the discussion but ultimately misleading. We set out a number of conceptual preconditions for being a moral agent and then outline how one should, and should not, go about attributing moral agency. In place of a litmus test for such agency, such as Colin Allen et al.'s Moral Turing Test, we suggest some tools from conceptual spaces theory for mapping out the nature and extent of that agency.
This paper follows directly from an earlier paper where we discussed the requirements for an artifact to be a moral agent and concluded that the artifactual question is ultimately a red herring. As before, we take moral agency to be that condition in which an agent can appropriately be held responsible for her actions and their consequences. We set a number of stringent conditions on moral agency. A moral agent must be embedded in a cultural and specifically moral context and embodied in a suitable physical form. It must be, in some substantive sense, alive. It must exhibit self-conscious awareness. It must exhibit sophisticated conceptual abilities, going well beyond what the likely majority of conceptual agents possess: not least, it must possess a well-developed moral space of reasons. Finally, it must be able to communicate its moral agency through some system of signs: a "private" moral world is not enough. After reviewing these conditions and pouring cold water on recent claims for having achieved "minimal" machine consciousness, we turn our attention to a number of existing and, in some cases, commonplace artifacts that lack moral agency yet nevertheless require one to take a moral stance toward them, as if they were moral agents. Finally, we address another class of agents raising a related set of issues: autonomous military robots.