2018
DOI: 10.1109/lra.2018.2861569
Action Anticipation: Reading the Intentions of Humans and Robots

Abstract: Humans have the fascinating capacity of processing non-verbal visual cues to understand and anticipate the actions of other humans. This "intention reading" ability is underpinned by shared motor repertoires and action models, which we use to interpret the intentions of others as if they were our own. We investigate how different cues contribute to the legibility of human actions during interpersonal interactions. Our first contribution is a publicly available dataset with recordings of human body motion an…

Cited by 70 publications (35 citation statements)
References 29 publications
“…It is argued that mutual interaction between humans and humans is needed if robots should be considered as partners instead of as tools [ 17 , 19 , 30 , 35 , 36 , 37 ], but to what extent they need to grasp the intentions of others is a much debated issue. However, at least it is argued that to achieve some kind of action and intention recognition between humans and robots, which possibly is a pre-requisite for some basic social interaction skills [ 24 , 26 , 28 , 30 ], is necessary for developing into engaging in more advanced forms of social interaction such as joint actions and mutual collaboration [ 38 , 39 , 40 , 41 , 42 , 43 , 44 , 45 , 46 , 47 , 48 , 49 , 50 , 51 , 52 , 53 , 54 , 55 ]. In other words, it requires that robots are able to perceive similar emotional and behavioral patterns and environmental cues as humans do (e.g., [ 1 , 5 , 53 , 54 , 55 ]).…”
Section: Introduction
confidence: 99%
“…In the fields of HRI and robotics, there has been a lot of research on robots identifying, understanding, and predicting human intention and actions (e.g., [ 1 , 17 , 18 , 19 , 20 , 24 , 33 , 39 , 40 , 41 , 45 , 50 , 51 , 52 ]). It should be noted, however, that there is little existing work in which robots are able to fully satisfy the requirements for having recognition capacities, although they display some aspects of recognition.…”
Section: Introduction
confidence: 99%
“…In the process of human motion prediction, the robot must quickly infer the intention and future position of other human partners in HRC [26]- [33]. In order to predict human intentions, Ferrer and Sanfeliu denoted a complete probabilistic framework that consists of prediction algorithm, behavior estimator, and intentionality predictor [31].…”
Section: Related Work
confidence: 99%
“…Additional information can be inferred from other modalities, since studies show that people convey a considerable amount of information through non-verbal cues [21]: for instance, through gaze [13,1], head movements [25], and gestures [6]. Our focus is on combining three modalities: speech, head movements, and pointing gestures.…”
Section: Related Work
confidence: 99%
“…Recent studies focused on intent recognition by combining different features from speech with gaze fixations [1], head movements [25], and gestures [6]. However, in non-guided natural human-robot interaction this approach has its own limitations.…”
Section: Introduction
confidence: 99%