2020
DOI: 10.1038/s41598-020-60923-5
Humans Predict Action using Grammar-like Structures

Abstract: Efficient action prediction is of central importance for the fluent workflow between humans and equally so for human-robot interaction. To achieve prediction, actions can be algorithmically encoded by a series of events, where every event corresponds to a change in a (static or dynamic) relation between some of the objects in the scene. These structures are similar to a context-free grammar and, importantly, within this framework the actual objects are irrelevant for prediction, only their relational changes m…
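The abstract's central idea, encoding an action as a chain of relational-change events between abstract roles and predicting the ongoing action from that chain alone, can be illustrated with a minimal Python sketch. The event vocabulary, the two toy action sequences, the names ACTION_LIBRARY and predict_action, and the prefix-matching rule are all illustrative assumptions, not the authors' actual grammar or algorithm.

# Minimal sketch (not the paper's implementation): an action is encoded as a
# sequence of relational-change events between abstract object roles, and a
# partially observed action is predicted by prefix-matching that sequence
# against a small library of known action "grammars". Event and action names
# below are invented for illustration.

from typing import Dict, List, Tuple

# One event = (role_a, role_b, new_relation); the concrete objects are
# irrelevant, only the roles and the relational change matter.
Event = Tuple[str, str, str]

ACTION_LIBRARY: Dict[str, List[Event]] = {
    "pick and place": [
        ("hand", "object", "touching"),
        ("object", "support", "not_touching"),
        ("object", "target", "touching"),
        ("hand", "object", "not_touching"),
    ],
    "put on top": [
        ("hand", "object", "touching"),
        ("object", "support", "not_touching"),
        ("object", "other_object", "above"),
        ("hand", "object", "not_touching"),
    ],
}

def predict_action(observed: List[Event]) -> List[str]:
    """Return all library actions whose event sequence starts with the
    observed prefix; a unique match is an early prediction."""
    return [
        name for name, seq in ACTION_LIBRARY.items()
        if seq[: len(observed)] == observed
    ]

if __name__ == "__main__":
    # After two observed relational changes both actions are still possible...
    prefix = [("hand", "object", "touching"), ("object", "support", "not_touching")]
    print(predict_action(prefix))   # ['pick and place', 'put on top']
    # ...the third change disambiguates the action before it is completed.
    prefix.append(("object", "other_object", "above"))
    print(predict_action(prefix))   # ['put on top']

Because prediction only compares relational-change prefixes, the sketch also reflects the abstract's point that object identity can be abstracted away entirely.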

Citations: cited by 9 publications (4 citation statements)
References: 32 publications
“…Note that all methodological details concerning our spatial relations definition (section 5.4.1) and their computation (section 5.5) as well as details of the similarity measurement algorithm (section 5.6) were reported previously in [23] and [28]. Hence, the next three subsections are essentially a repetition from those two papers without many changes.…”
Section: Details Of Machine Action Prediction (mentioning)
confidence: 98%
“…The approach was developed based on previous assumptions on the importance of spatial relations in action recognition [24][25][26][27][28] and stands in contrast to action recognition and prediction methods based on time continuous information, like trajectories [29][30][31][32] or continuous action videos [33][34][35]. It also stands in contrast to the methods exploiting rich contextual information [36][37][38][39][40].…”
Section: Introduction (mentioning)
confidence: 99%
“…We trained our algorithm on 7 different trajectories performing ordinary robotic tasks. The tasks are hide, unhide, move down, move up, pick and place, put on top and take down (see e.g., Wörgötter et al (2020)). Movement data is given in 3 dimensional Cartesian coordinates, resulting in three outputs or - biologically speaking - in three rate coded output neurons.…”
Section: Stability and Output Learning (mentioning)
confidence: 99%
“…The tasks are hide, unhide, move down, move up, pick and place, put on top and take down (see e.g. Wörgötter et al, 2020). Movement data is given in 3 dimensional Cartesian coordinates, resulting in three outputs or - biologically speaking - in three rate coded output neurons.…”
Section: Stability and Output Learning (mentioning)
confidence: 99%