In the research field of machine ethics, artificial moral agents are commonly categorized into four types, the most advanced of which is referred to as a full ethical agent, or sometimes a full-blown Artificial Moral Agent (AMA). This type has three main characteristics: autonomy, moral understanding, and a certain level of consciousness, including intentional mental states, moral emotions such as compassion, the ability to praise and condemn, and a conscience. This paper discusses various aspects of full-blown AMAs and presents the following argument: creating full-blown artificial moral agents endowed with intentional mental states and moral emotions, and trained to align with human values, does not, by itself, guarantee that these systems will have human morality. It is therefore questionable whether such agents will be inclined to honor and follow values they perceive as incorrect. We do not intend to claim that there is such a thing as a universally shared human morality; our claim is only that, just as different human communities hold different sets of moral values, the moral systems or values of the artificial agents under discussion would differ from those held by human communities, for reasons we discuss in the paper.