In the near future, the capabilities of commonly used artificial systems will reach a level where we will be able to permit them to make moral decisions autonomously as part of their proper daily functioning. Autonomous cars, personal assistants, household robots, stock trading bots, and autonomous weapons are examples of systems that will face simple to complex moral situations requiring some level of moral judgment. In the research field of machine ethics, several types of artificial moral agents are distinguished, each with a different level of moral agency. In this paper, we focus on the moral agency of Explicit and Full-blown artificial moral agents. We form a position regarding their level of moral agency and then examine whether it is morally right to align the values of (artificial) moral agents. If we assume, or are able to determine, that certain types of artificial agents are indeed moral agents, then we ought to examine whether it is morally right to construct them so that they are "committed" to human values. We discuss an analogy to human moral agents and the implications of granting or denying moral agency to artificial agents.
Keywords Artificial intelligence • Artificial moral agent (AMA) • Moral agency • AI alignment

"Fear is the main source of superstition, and one of the main sources of cruelty. To conquer fear is the beginning of wisdom."
Bertrand Russell, Unpopular Essays