2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI)
DOI: 10.1109/hri.2013.6483614

Are you looking at me? Perception of robot attention is mediated by gaze type and group size

Abstract: Studies in HRI have shown that people follow and understand robot gaze. However, only a few studies to date have examined the time-course of a meaningful robot gaze, and none have directly investigated what type of gaze is best for eliciting the perception of attention. This paper investigates two types of gaze behaviors (short, frequent glances and long, less frequent stares) to find which behavior is better at conveying a robot's visual attention. We describe the development of a programmable research platform…

Cited by 34 publications (18 citation statements)
References 24 publications
“…2.1. Section 2.2 evaluates six existing system architectures identified from a review of 32 articles from the human-robot interaction literature [1,2,4,7,8,10,19,24,26,28-34,37-40,44,46,49,50,53,57,58,61-63,67,70] to determine if they can be characterized as a reference architecture. These 32 articles capture at least…”
Section: Related Work (mentioning)
confidence: 99%
“…The reference architecture provides a computational mapping between different communicative and nonverbal functions of human social head gaze to the expression of one or more discrete robot head gaze actions (range, speed, and frequency). It embodies architectural best practices gathered from the design and development of various robotics applications that implement social head gaze, such as healthcare [24,37], victim management [8,19], robot guides [7,38,50,62,70], entertainment [10,26,28,29,44,49], telepresence [39,63], and fundamental research [2,31-33,40,57,61,67].…”
Section: Introduction (mentioning)
confidence: 99%
“…Following this, it is reasonable to expect technologist students to likewise experience these social reactions to robots, and to respond positively to robots that use human-like social interaction techniques to communicate. Little motivation is then needed to introduce social aspects when studying how robots work with people, and students can be expected to quickly understand, for example, why it may be useful for a robot to have eyes for communicating with people, and why we should study how a robot can use these eyes to give gaze cues to inform a human collaborator of where it will move next (Admoni, Hayes, Feil-Seifer, Ullman, & Scassellati, 2013). Therefore, HRI can be a particularly effective tool for initiating and engaging socially grounded discussion.…”
Section: Using HRI To Expose Technologists To the Social Aspects Of T… (mentioning)
confidence: 99%
“…The authors claim that, in general, the gaze cue led to better performance, and even more so with Robovie than with Geminoid. A study by Admoni et al. [1] examined the features that make a robot appear to be attending to someone; their findings reveal that people recognize shorter, frequent fixations from a robot better than longer, less frequent cues. The authors in [28] examined how people perceive gaze cues and head angles directed towards different target positions on a table when a human and a NAO robot sit across from each other, as in board-game scenarios.…”
Section: Related Work (mentioning)
confidence: 99%
“…While a lot of gaze research has taken a human-centered approach to examine the ability of humans to read and perceive social cues from robot gaze [1,11,13,26,34], many questions remain unclear, particularly on how children perceive and respond to gaze cues, and whether they are able to attribute intentions to a robot's gaze cues during child-robot interaction. In this paper, we examine whether children read/notice gaze hints in humanoid robots; if so, whether they are able to interpret these cues appropriately and, finally, whether and under which conditions these social cues impact their performance and their cognitions about the robot.…”
Section: Introduction (mentioning)
confidence: 99%