Perceived humanness affects how people behave toward artificial agents and can be examined with imitation games, in which participants interact with either a human or an artificial agent and then guess which one they interacted with. The current experiment uses the multiplayer, socially interactive video game "Don't Starve Together" to examine whether participants (Player 1) can distinguish a human from an artificial agent (Player 2) based on how Player 2 makes decisions in a complex environment. We explore how sensitive participants are in discriminating human from AI behavior, and whether Player 2's actual and/or perceived humanness affects participants' willingness to cooperate with them. The results show that while participants were good at correctly identifying AI behavior, they misidentified the human player with high likelihood. Cooperation during the game was higher among participants who correctly identified Player 2 (as either human or AI), and participants always preferred to cooperate with Player 2 when they believed Player 2 was human. Willingness to cooperate with Player 2 in future interactions was higher when participants misidentified Player 2, and was not affected by Player 2's actual or perceived humanness.
As humanoid robots become more advanced and commonplace, the average user may find themselves wondering whether their robotic companion truly possesses a mind, one capable of social decisions such as the deliberate choice to act fairly or selfishly. It is important for scientists and designers to consider how this will affect our interactions with social robots. The current paper explores how social decision-making with humanoid robots changes as their degree of human-likeness changes. For that purpose, we created a spectrum of agents via morphing that ranged from very robot-like to very human-like in physical appearance (in increments of 20%) and measured how this change in physical humanness affected decision-making in two economic games: the Ultimatum Game (Experiment 1) and the Trust Game (Experiment 2). We expected increases in human-like appearance to lead to a higher rate of punishment for unfair offers in the Ultimatum Game, and to higher trust in the Trust Game. While physical humanness did not affect economic decisions in either experiment, follow-up analyses showed that both subjective ratings of trust and agent approachability mediated the effect of agent appearance on decision-making in both experiments. Possible consequences of these findings for human-robot interaction are discussed.