Social robotics entertains a particular relationship with anthropomorphism, which it sees neither as a cognitive error nor as a sign of immaturity. Rather, it considers that this common human tendency, hypothesized to have evolved because it favored cooperation among early humans, can be used today to facilitate social interactions between humans and a new type of cooperative and interactive agent – social robots. This approach leads social robotics to focus research on the engineering of robots that activate anthropomorphic projections in users. The objective is to give robots a “social presence” and “social behaviors” that are sufficiently credible for human users to engage in comfortable and potentially long-lasting relations with these machines. This choice of ‘applied anthropomorphism’ as a research methodology exposes the artifacts produced by social robotics to ethical condemnation: social robots are judged to be a “cheating” technology, as they generate in users the illusion of reciprocal social and affective relations. This article takes a position in this debate, not only by developing a series of arguments relevant to philosophy of mind, the cognitive sciences, and robotic AI, but also by asking what social robotics can teach us about anthropomorphism. On this basis, we propose a theoretical perspective that characterizes anthropomorphism as a basic mechanism of interaction and rebuts the ethical reflections that a priori condemn “anthropomorphism-based” social robots. To address the relevant ethical issues, we promote a critical, experimentally based ethical approach to social robotics, “synthetic ethics,” which aims at allowing humans to use social robots for two main goals: self-knowledge and moral growth.
Most authors view trust as a psychological disposition that motivates an agent to behave cooperatively in circumstances where the consequences of the agent’s action are uncertain, especially when that uncertainty depends on the reactions of other agents. This article criticizes this general approach and argues that trust is best seen as the action itself rather than as whatever motivates it. Trust is defined as an act through which an agent gives another agent power over him or herself.