Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction 2009
DOI: 10.1145/1514095.1514109
Footing in human-robot conversations

Abstract: During conversations, speakers establish their and others' participant roles (who participates in the conversation and in what capacity), or "footing" as termed by Goffman, using gaze cues. In this paper, we study how a robot can establish the participant roles of its conversational partners using these cues. We designed a set of gaze behaviors for Robovie to signal three kinds of participant roles: addressee, bystander, and overhearer. We evaluated our design in a controlled laboratory experiment with 72 subjec…

Cited by 275 publications (38 citation statements)
References 39 publications
“…Therefore, much work (e.g. [3,30,32,36]) in HRI has focused on the functions of robot gaze in conversations. Mutlu et al. [36] explored the use of robot gaze behaviors to signal participant roles and manage turn-exchanges in a triadic conversation.…”
Section: Robot Gaze in HRI (mentioning)
confidence: 99%
“…[3,30,32,36]) in HRI has focused on the functions of robot gaze in conversations. Mutlu et al. [36] explored the use of robot gaze behaviors to signal participant roles and manage turn-exchanges in a triadic conversation. Andrist et al. [3] investigated gaze aversion and its functions of signaling cognitive effort, modulating intimacy level, and managing turn-taking in human-robot conversations.…”
Section: Robot Gaze in HRI (mentioning)
confidence: 99%
“…In robotics, there has been study of how speaker-listener roles can be strongly shaped by controlling the single modality of eye gaze (Mutlu, Shiwa, Ishiguro, & Hagita, 2009). There has also been related work on conversational engagement.…”
Section: Related Work (mentioning)
confidence: 99%
“…Collaborative interactions can be improved by including nonverbal communication, such as having the robot partner recognize head nodding and respond by nodding back (Sidner, Lee, Morency, & Forlines, 2006). Employing mutual gaze and gazing at objects that are the subjects of conversation can improve the conversation (Mutlu, Shiwa, Kanda, Ishiguro, & Hagita, 2009; Sidner, Kidd, Lee, & Lesh, 2004). Taking appropriate turns in verbal dialogue by using gaze, verbal cues, body language, and robot learning also makes an H-R conversation easier for the human partner (Chao & Thomaz, 2010).…”
Section: Robot Embodiment (mentioning)
confidence: 99%