2018
DOI: 10.1515/pjbr-2018-0012

How does the robot feel? Perception of valence and arousal in emotional body language

Abstract: Human-robot interaction in social robotics applications could be greatly enhanced by robotic behaviors that incorporate emotional body language. Using as our starting point a set of pre-designed, emotion conveying animations that have been created by professional animators for the Pepper robot, we seek to explore how humans perceive their affect content, and to increase their usability by annotating them with reliable labels of valence and arousal, in a continuous interval space. We conducted an experiment with…
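
The abstract describes annotating each animation with continuous valence and arousal labels. The sketch below shows one plausible way to represent such an annotation; the class name, field names, and the [-1.0, 1.0] interval are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class AffectAnnotation:
    """One animation labelled in a continuous valence-arousal space.

    Both coordinates are assumed here to lie in [-1.0, 1.0]; the exact
    interval used in the paper may differ.
    """
    animation_id: str
    valence: float   # negative = unpleasant, positive = pleasant
    arousal: float   # low = calm, high = excited

    def __post_init__(self):
        if not (-1.0 <= self.valence <= 1.0 and -1.0 <= self.arousal <= 1.0):
            raise ValueError("valence and arousal must lie in [-1.0, 1.0]")

# Example: a calm, mildly positive Pepper animation (hypothetical values)
happy_idle = AffectAnnotation("pepper_happy_idle", valence=0.4, arousal=-0.2)
```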

Cited by 28 publications (20 citation statements) · References 38 publications
“…Compared to previous studies of emotional expressions of robots, which resulted in a list of gestures per emotional state (Beck et al., 2010; Häring et al., 2011; Erden, 2013), this study has resulted not just in the perceived aspect of each gesture, but also a quantitative probability distribution of its perception (Marmpena et al., 2018). This can be used not only in understanding how people perceive robots' gestures, but also to generate appropriate gestures based on required attitudes, while maintaining variability and engagement.…”
[Figure 2 of the citing article: Robot gestures and attribute valence (X) probability (Y) distribution; two robots were always present in the videos.]
Section: Discussion (mentioning)
confidence: 99%
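
The excerpt above highlights a quantitative probability distribution of how each gesture's valence is perceived. A minimal sketch of estimating such an empirical distribution from per-rater scores is given below; the bin count, the [-1.0, 1.0] range, and the sample ratings are assumptions for illustration, not the cited study's setup.

```python
import numpy as np

def valence_distribution(ratings, bins=7, value_range=(-1.0, 1.0)):
    """Empirical probability distribution of perceived valence for one gesture.

    ratings: per-rater valence scores for a single gesture.
    Returns (bin_centers, probabilities); probabilities sum to 1.
    """
    counts, edges = np.histogram(ratings, bins=bins, range=value_range)
    probs = counts / counts.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    return centers, probs

# Example with ten hypothetical raters for one gesture
centers, probs = valence_distribution([0.6, 0.4, 0.7, 0.2, 0.5, 0.6, 0.3, 0.8, 0.5, 0.4])
```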
“…The 7-point Likert scale ranges from "very negative" on the left to "very positive" on the right. (… 2010; Häring et al., 2011; Erden, 2013; Thimmesch-Gill et al., 2017; Marmpena et al., 2018.) We created 16 different gestures using Aldebaran Choregraphe, based on the social meanings of nonverbal behaviors in humans (Rashotte, 2002); see Figure 2.…”
Section: Methods (mentioning)
confidence: 99%
“…The second part of the experiment involved all participants in a single condition. The participants were shown a video of each of the six MR arm gestures and asked to rate each gesture's valence on a decimal scale from very negative (-1.00) to very positive (+1.00), as in prior work [39], and to describe verbally, in written form, what each gesture conveyed.…”
Section: Experiments Design (mentioning)
confidence: 99%
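
The two excerpts above use different response formats: a 7-point Likert scale and a decimal valence scale from -1.00 to +1.00. The small helper below shows one linear rescaling between the two, purely as an illustration of how such ratings can be made comparable; neither cited study necessarily applied this exact mapping.

```python
def likert_to_valence(response, points=7):
    """Linearly map a 1..points Likert response onto the [-1.0, +1.0] range.

    On a 7-point scale: 1 -> -1.0, 4 -> 0.0, 7 -> +1.0.
    """
    if not 1 <= response <= points:
        raise ValueError(f"response must be between 1 and {points}")
    return 2.0 * (response - 1) / (points - 1) - 1.0

assert likert_to_valence(1) == -1.0
assert likert_to_valence(4) == 0.0
assert likert_to_valence(7) == 1.0
```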
“…• Emotional expression of robots: in complex interaction scenarios, such as assistive, educational, and social robotics (Fong et al., 2003; Rossi et al., 2020), the ability of robots to exhibit recognizable emotional expressions strongly impacts the resulting social interaction (Mavridis, 2015). Several studies focused on exploring which modalities (e.g., face expression, body posture, movement, voice) can convey emotional information from robots to humans and how people perceive and recognize emotional states (Tsiourti et al., 2017; Marmpena et al., 2018; Rossi and Ruocco, 2019);
• Ability of robots to infer the human emotional state: robots able to infer and interpret human emotions would be more effective in interacting with people. Recent works aim to design algorithms for classifying emotional states from different input modalities, such as facial expression, body language, voice, and physiological signals (McColl et al., 2016; Cavallo et al., 2018).…”
Section: Introduction (mentioning)
confidence: 99%