2021
DOI: 10.1075/is.20023.hin

Why robots should be technical

Abstract: Research in social robotics is commonly focused on designing robots that imitate human behavior. While this might increase a user’s satisfaction and acceptance of robots at first glance, it does not automatically aid a non-expert user in naturally interacting with robots, and might hurt their ability to correctly anticipate a robot’s capabilities. We argue that a faulty mental model that the user has of the robot is one of the main sources of confusion. In this work, we investigate how co…

Cited by 17 publications (7 citation statements)
References 34 publications
“…Thus, while the interaction at the level of the scene depicted seems to progress rather effortlessly, making use of intuitive human interaction strategies that are strengthened by the anthropomorphization of the robot, the interaction at the "raw artifact level" requires explicit reasoning processes in order to try to find an explanation of the (unexpected) robot behavior. In line with this, studies indicate that during human-robot interaction (HRI) the interaction with a robot is facilitated when the users had a better understanding of the architecture, that is, the raw artifact, and thus were better able to derive the reasons for interaction errors (Hindemith, Göpfert, Wiebel-Herboth, Wrede, & Vollmer, 2021). Moreover, higher anthropomorphism scores, that is, perceiving the robot as more human-like, were associated with a decreased understanding of interaction errors (Hindemith et al, 2021) and less interaction success (Hindemith et al, 2021), suggesting that a convincingly depicted scene, as indicated by high anthropomorphism scores, hindered the correct processing of the raw artifact.…”
Section: Social Robots and the Intentional Stance
mentioning
confidence: 89%
“…In line with this, studies indicate that during human-robot interaction (HRI) the interaction with a robot is facilitated when the users had a better understanding of the architecture, that is, the raw artifact, and thus were better able to derive the reasons for interaction errors (Hindemith, Göpfert, Wiebel-Herboth, Wrede, & Vollmer, 2021). Moreover, higher anthropomorphism scores, that is, perceiving the robot as more human-like, were associated with a decreased understanding of interaction errors (Hindemith et al, 2021) and less interaction success (Hindemith et al, 2021), suggesting that a convincingly depicted scene, as indicated by high anthropomorphism scores, hindered the correct processing of the raw artifact. These findings are in line with neurobiological investigations of HRI showing that brain regions associated with theorizing about another agent's putative intentions were increasingly engaged the more human-like the scene was depicted (Hegel, Krach, Kircher, Wrede, & Sagerer, 2008; Krach et al, 2008).…”
Section: Social Robots and the Intentional Stance
mentioning
confidence: 89%
“…Figure 8 shows the line chart of the relationship between fitness and the number of iterations. It can be seen that fitness decreases as the number of iterations increases and stops decreasing after reaching a certain value [19,20]. See Figure 8.…”
Section: Function Evaluation
mentioning
confidence: 94%