2018
DOI: 10.3389/fnbot.2018.00007

Learning Semantics of Gestural Instructions for Human-Robot Collaboration

Abstract: Designed to work safely alongside humans, collaborative robots need to be capable partners in human-robot teams. Besides having key capabilities like detecting gestures, recognizing objects, grasping them, and handing them over, these robots need to seamlessly adapt their behavior for efficient human-robot collaboration. In this context, we present the fast, supervised Proactive Incremental Learning (PIL) framework for learning associations between human hand gestures and the intended robotic manipulation actio…
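The abstract describes PIL only at a high level: a supervised, incremental learner that associates recognized hand gestures with manipulation actions, so the robot can act proactively once an association is well established. The sketch below is a minimal illustration of that idea, not the paper's implementation; the gesture and action labels and the confidence rule are assumptions made here for the example.

```python
# Minimal sketch of incremental gesture-to-action association learning.
# This is NOT the paper's PIL implementation; gesture labels, action names,
# and the confidence rule below are illustrative assumptions only.
from collections import Counter, defaultdict

class IncrementalGestureActionLearner:
    def __init__(self, confidence_threshold=0.8):
        # For each gesture, count how often each action was confirmed correct.
        self.counts = defaultdict(Counter)
        self.confidence_threshold = confidence_threshold

    def update(self, gesture, action):
        """Supervised increment: the human confirms `action` for `gesture`."""
        self.counts[gesture][action] += 1

    def predict(self, gesture):
        """Return (action, confident?) for a recognized gesture.

        If the empirical association is strong enough, the robot can act
        proactively; otherwise it should ask the human for confirmation.
        """
        actions = self.counts[gesture]
        if not actions:
            return None, False
        action, hits = actions.most_common(1)[0]
        confidence = hits / sum(actions.values())
        return action, confidence >= self.confidence_threshold

# Hypothetical usage: the labels are placeholders, not from the paper.
learner = IncrementalGestureActionLearner()
learner.update("point_at_cup", "hand_over_cup")
learner.update("point_at_cup", "hand_over_cup")
print(learner.predict("point_at_cup"))  # -> ('hand_over_cup', True)
```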

Cited by 16 publications (12 citation statements)
References 32 publications
“…Reference [11] performed excellent work in the opposite direction by proposing a method to interpret robot behavior as intention signals using natural language sentences, so as to better reveal robot behaviors and reduce misunderstandings caused by information asymmetry. Reference [12] proposed the Proactive Incremental Learning (PIL) framework, which learns the connection between human gestures and robot actions, contributing to efficient human-robot interaction. In 2019, Ref.…”
Section: Related Work
confidence: 99%
“…It is important for the robot, in this case, to understand the user's intentions and expectations when receiving the object. In [42], the robot needs to identify which of the available objects the user desires and hand it over. By contrast, in [18] the robot has to adjust to a change in the user's intentions while receiving the object.…”
Section: Collaborative Tasks
confidence: 99%
“…They are primarily chosen by considering the state of the interaction at the current time and assessing the return achieved by taking one action rather than another. This formulation naturally lends itself to being tackled through reinforcement learning [21,42,43] and graphical models [19,45] (see Section 5). In an alternative interpretation, [38] chooses ranges of interaction levels for the robot instead of specific actions to perform.…”
Section: Robot's Cognitive Capabilities
confidence: 99%
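The passage above describes return-based action selection: the robot picks the action whose estimated return in the current interaction state is highest. A minimal, generic sketch of that selection rule follows (standard epsilon-greedy over a tabular Q-function); the states, actions, and reward values are assumptions made here for illustration and are not taken from any of the cited systems.

```python
# Generic sketch of return-based action selection: choose the action whose
# estimated return in the current interaction state is highest, with
# occasional exploration (epsilon-greedy over a Q-table). States, actions,
# and values are illustrative assumptions, not from the cited papers.
import random
from collections import defaultdict

q_values = defaultdict(float)  # (state, action) -> estimated return

def select_action(state, actions, epsilon=0.1):
    """Mostly exploit the highest-return action; occasionally explore."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q_values[(state, a)])

def update(state, action, reward, next_state, actions, alpha=0.5, gamma=0.9):
    """One-step Q-learning update of the estimated return."""
    best_next = max(q_values[(next_state, a)] for a in actions)
    q_values[(state, action)] += alpha * (
        reward + gamma * best_next - q_values[(state, action)]
    )

# Hypothetical interaction states/actions for a handover scenario.
actions = ["offer_object", "wait", "ask_confirmation"]
a = select_action("user_reaching", actions)
update("user_reaching", a, reward=1.0, next_state="object_received", actions=actions)
```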
“…Human-machine interfaces (HMIs) are becoming increasingly widespread, with applications spanning assistive devices for disability, muscle rehabilitation, prosthesis control, remote manipulation, and gaming controllers (McKirahan and Guccione, 2016; Boy, 2017; Beckerle et al., 2018). Since the hand is extremely important in everyday life, an entire field of HMI is dedicated to hand gesture recognition applications (Arapi et al., 2018; Shukla et al., 2018). Generally, visual, electromyographic, or inertial sensors are the most widely used technologies for detecting hand gestures (Cho et al., 2017; Ghafoor et al., 2017; Bisi et al., 2018; Polfreman, 2018).…”
Section: Introduction
confidence: 99%