Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2014)
DOI: 10.1145/2559636.2559668

Learning-based modeling of multimodal behaviors for humanlike robots

Abstract: In order to communicate with their users in a natural and effective manner, humanlike robots must seamlessly integrate behaviors across multiple modalities, including speech, gaze, and gestures. While researchers and designers have successfully drawn on studies of human interactions to build models of humanlike behavior and to achieve such integration in robot behavior, the development of such models involves a laborious process of inspecting data to identify patterns within each modality or across modalities …

Cited by 74 publications (54 citation statements: 3 supporting, 51 mentioning, 0 contrasting).
References: 36 publications.
“…Eye contact and head gaze give the impression that the robot is listening and paying attention to what people say or do (Andrist, Tan, Gleicher, & Mutlu, 2014; Hoffman et al., 2008; Huang & Mutlu, 2014; Srinivasan, Murphy, & Bethel, 2015). • Coherence between internal state and non-verbal expressions: The robot actor should define what gesture, body pose or physical action best describes the internal state and its context (Hoffman, 2011; Simmons et al., 2011).…”
Section: Natural Movements (mentioning)
confidence: 99%
“…Quantitative evaluation showed that the LDCRF models achieved the best performance, underlining the importance of learning the dynamics between different gesture classes and the hidden internal orchestration of the gestures. Huang et al. [28] explored how a learning-based approach meant to model multimodal behaviors might address the limitations of heuristic-based models. They used DBNs to model the coordination of speech, gaze, and gesture behaviors in narration.…”
Section: Related Work (mentioning)
confidence: 99%
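The statement above describes the DBN approach only at a high level. As an illustrative aid, the following is a minimal sketch of how a dynamic Bayesian network can couple speech segments with gaze and gesture: a hidden discourse state evolves over time and jointly drives both observed modalities, and forward filtering recovers the state from the observations. All state names, probability tables, and the filtering routine are hypothetical assumptions for illustration, not the model or parameters from Huang and Mutlu (2014).

```python
import numpy as np

# Hypothetical toy DBN: a hidden "discourse state" S_t evolves across speech
# segments and jointly generates the gaze target and gesture type observed
# at each segment. All names and probability tables are illustrative
# assumptions, not parameters learned in the cited work.

STATES = ["introduce", "refer", "wrap_up"]   # hidden discourse state S_t
GAZE = ["listener", "object"]                # observed gaze target
GESTURE = ["beat", "deictic", "none"]        # observed gesture type

prior = np.array([0.8, 0.1, 0.1])            # P(S_0)
trans = np.array([[0.6, 0.3, 0.1],           # P(S_t | S_{t-1}), rows = S_{t-1}
                  [0.1, 0.7, 0.2],
                  [0.0, 0.2, 0.8]])
p_gaze = np.array([[0.9, 0.1],               # P(gaze | S_t), rows = S_t
                   [0.2, 0.8],
                   [0.7, 0.3]])
p_gest = np.array([[0.5, 0.1, 0.4],          # P(gesture | S_t), rows = S_t
                   [0.1, 0.8, 0.1],
                   [0.6, 0.0, 0.4]])

def forward_filter(observations):
    """Return P(S_t | observations up to t) for each speech segment."""
    belief = prior.copy()
    beliefs = []
    for gaze, gest in observations:
        # Condition on both modalities (assumed independent given S_t).
        belief = belief * p_gaze[:, GAZE.index(gaze)] * p_gest[:, GESTURE.index(gest)]
        belief /= belief.sum()
        beliefs.append(belief)
        belief = trans.T @ belief            # predict the next hidden state
    return beliefs

# Example: three speech segments with observed (gaze, gesture) pairs.
obs = [("listener", "beat"), ("object", "deictic"), ("listener", "none")]
for t, b in enumerate(forward_filter(obs)):
    print(f"segment {t}: " + ", ".join(f"P({s})={p:.2f}" for s, p in zip(STATES, b)))
```

The same factorization can be run generatively: given an inferred or scripted discourse state, gaze and gesture are sampled from their conditional tables, which is one way such a model can produce coordinated multimodal behavior for a robot narrator.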
“…Huang and Mutlu found that participants' recall of items in a factual talk presented by a robot was reliably improved if the robot used deictic gestures, while other types of gesture had little impact [21]. Bremner et al. found that parts of a monologue accompanied by (metaphoric and beat) gestures were not recalled any better than those without, though higher certainty in the information recalled by the gestures was observed [22].…”
Section: B. Gestures in Human-Robot Interaction (mentioning)
confidence: 99%