RO-MAN 2009: The 18th IEEE International Symposium on Robot and Human Interactive Communication
DOI: 10.1109/roman.2009.5326199

Feedback interpretation based on facial expressions in human-robot interaction

Abstract: In everyday conversation, besides speech, people also communicate by means of nonverbal cues. Facial expressions are one important cue, as they can provide useful information about the conversation, for instance, whether the interlocutor seems to understand or appears to be puzzled. Similarly, in human-robot interaction, facial expressions give feedback about the interaction situation. We present a Wizard of Oz user study in an object-teaching scenario where subjects showed several objects to a robot…

Cited by 10 publications (19 citation statements)
References 15 publications
“…The order of the trials was counterbalanced. Based on findings from previous research [3], all other utterances were distributed equally in all scripts ("Pardon?" twice, "I don't know the word" once, etc.).…”
Section: A. Experimental Conditions and Scripts (mentioning)
confidence: 99%
“…The video database used in this paper is the object teaching corpus presented by Lang et al [12]. It contains videos of people interacting with the robot "Biron" [11] in an object-teaching scenario.…”
Section: Video Database (mentioning)
confidence: 99%
“…On the other hand, the classification problem is expected to be hard, as the average human performance is only 82% [12] (please see section 5). For the subsequent investigations, we used only the best performing variant of each feature (marked in bold in table 1), except for the gabor energy filters, where we used variant "gab-4" instead of "gab-8" because of the lower feature vector dimensionality (640 compared to 2,560) and the only marginal difference in the classification rate (0.3% means just one more video classified correctly).…”
Section: Meta Parameter Selection (mentioning)
confidence: 99%
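
The citation statement above refers to banks of Gabor energy filters as facial-expression features. As a rough illustration of how such a feature vector is typically built, here is a minimal Python sketch. The concrete settings (4 orientations, 4 scales, an 8x5 mean-pooling grid, so 16 filters x 40 cells = 640 dimensions, matching the quoted "gab-4" size) are assumptions made purely for illustration, not the configuration used in [12].

import cv2
import numpy as np

def gabor_energy_features(gray, n_orient=4, sigmas=(4.0, 6.0, 9.0, 13.0), grid=(8, 5)):
    # Resize to a fixed face patch so the pooled vector has a fixed length.
    img = cv2.resize(gray, (64, 80)).astype(np.float32)
    rows, cols = grid
    feats = []
    for sigma in sigmas:                       # 4 scales (assumed values)
        lambd = 1.5 * sigma                    # wavelength tied to scale (assumption)
        ksize = int(6 * sigma) | 1             # odd kernel size spanning ~6 sigma
        for i in range(n_orient):              # 4 orientations over [0, pi)
            theta = np.pi * i / n_orient
            # Quadrature pair: even (psi=0) and odd (psi=pi/2) Gabor kernels.
            k_even = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, 0.5, psi=0.0)
            k_odd = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, 0.5, psi=np.pi / 2)
            r_even = cv2.filter2D(img, cv2.CV_32F, k_even)
            r_odd = cv2.filter2D(img, cv2.CV_32F, k_odd)
            energy = np.sqrt(r_even**2 + r_odd**2)   # Gabor energy map
            # Mean-pool the energy map on a coarse grid (8 x 5 = 40 cells).
            h, w = energy.shape
            for r in range(rows):
                for c in range(cols):
                    cell = energy[r*h//rows:(r+1)*h//rows, c*w//cols:(c+1)*w//cols]
                    feats.append(float(cell.mean()))
    return np.asarray(feats)    # 4 scales * 4 orientations * 40 cells = 640 dims

# Usage: feat = gabor_energy_features(cv2.imread("face.png", cv2.IMREAD_GRAYSCALE))

Note that under these assumed settings, doubling the orientation count alone would only double the vector length; reaching the 2,560 dimensions quoted for "gab-8" would require additional changes (e.g., more scales or a finer pooling grid), so the exact variant definitions remain specific to [12].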