2011
DOI: 10.1007/978-3-642-23232-9_8

Contextual Recognition of Robot Emotions

Abstract: In the field of human-robot interaction, socially interactive robots are often equipped with the ability to detect the affective states of users, to express emotions through synthetic facial expressions, speech, and textual content, and to imitate and learn socially. Past work on creating robots that can make convincing emotional expressions has concentrated on the quality of those expressions and on assessing people's ability to recognize them. Previous recognition studi…

Cited by 9 publications (8 citation statements); references 109 publications (234 reference statements).
“…In particular, Fear, a notoriously difficult emotion to convey via robotic facial expressions, increased to nearly 100% with added context, regardless of the cultural background of the subjects. These findings concur with previously reported context effects in both humans/avatars (Righart & de Gelder, 2008; Barrett et al., 2011; Lee et al., 2012) as well as robots (Zhang & Sharkey, 2011). We were also able to replicate these effects in a companion study in which we looked at the effects of both incongruent and congruent context on people's perceptions of a robot's affective facial expressions, which showed significant differences across all emotions except for surprise.…”
Section: Discussion (supporting)
confidence: 90%
“…Anger, Surprise) with high accuracy. Elsewhere, Zhang and Sharkey (2011) have evaluated the effects of context on human identification of robotic facial expressions, and Embgen et al. (2012) have conducted robotic studies on emotional body language in lieu of facial expressions.…”
Section: The Role of Human-Robot Interaction (mentioning)
confidence: 99%
“…This trend varied by the valence of the expression [33]. Recent work has shown the importance of context in perceptions of robotic facial expressions across cultures as well [11,37].…”
Section: Context Congruency and Culture (mentioning)
confidence: 91%
“…The videos presented the robot displaying one type of visual feedback without being contextualized by the environment or interaction. A previous study showed evidence that context could affect participants' recognition of the robot's expressions [63]. For this reason, we avoided adding elements related to the interaction that might influence participants' choices, such as showing the happy state after a hug or the sad one after a hit.…”
Section: Questionnaire Overview (mentioning)
confidence: 99%