2015
DOI: 10.1109/thms.2015.2445856
Supporting Human–Robot Interaction Based on the Level of Visual Focus of Attention

Abstract: We propose a human–robot interaction approach for social robots that attracts and controls the attention of a target person depending on her/his current visual focus of attention. The system detects the person's current task (attention) and estimates its level by using "task-related contextual cues" and "gaze pattern." The attention level is used to determine a suitable time to attract the target person's attention toward the robot. The robot detects the interest or willingness of the target person to in…

Cited by 44 publications (21 citation statements)
References 42 publications
“…Dong et al. developed a hybrid gaze/electroencephalography (EEG) interface which suppressed the selection of unlikely commands based on the user's intent, estimated from their natural gaze trajectories. In the field of human–robot interaction, Das, Rashed, Kobayashi, and Kuno (2015) designed a robot able to determine a suitable time to interact with a person by estimating the person's visual focus of attention from cues including gaze patterns. Huang and Mutlu (2016) studied eye gaze in a scenario where a robot delivered an item to a user who had ordered it from a menu using speech.…”
Section: Introduction
confidence: 99%
“…People naturally tend to look at and focus their attention on objects of immediate interest [26]. Visual attention is also normally established in social contexts, such as a conversation between two people, and the ability to correctly simulate focusing visual attention on the interaction partner is considered a way for robots to exhibit social intelligence and awareness and to facilitate HRI [27].…”
Section: Visual and Social Attention in Children with ASD and Robots
confidence: 99%
“…More work in this direction is ongoing; see, for example, [1,12]. What is still missing are robots that robustly and smoothly focus on the user in their natural environment at home.…”
Section: Multiple Functionalities for the Active Robot Head
confidence: 99%
“…The most highly developed robot with capabilities similar to Hobbit is Care-O-bot. It has been tested in many trials in care facilities to study user interaction, for example, bringing water or performing other assistive operations [1]. However, this robot is too large to operate in homes; it would hardly fit through doors.…”
Section: Introduction
confidence: 99%