2020
DOI: 10.1007/978-3-030-58452-8_41
Forecasting Human-Object Interaction: Joint Prediction of Motor Attention and Actions in First Person Video

Cited by 93 publications (81 citation statements) · References 56 publications
“…2D human-object interaction. Perceiving human-object interactions in 2D images has been studied extensively [5,9,10,15,16,22,23,24,35,39,40,43]. Gkioxari et al [10] detect (human, verb, object) triplets using human appearance as cues to localize interacted objects.…”
Section: Related Work (classification: mentioning, confidence: 99%)
“…Egocentric Computer Vision Egocentric vision has been studied in various applications. To name a few, understanding human actions from egocentric cameras, including action/activity recognition [9,35,28,49,36,51,15,52], action anticipation [43,16], and human-object interaction [26], has been widely studied. Egocentric hand detection/segmentation [23,22,1,42] and pose estimation [54,66,31] are among other applications.…”
Section: Related Work (classification: mentioning, confidence: 99%)
“…The main focus of these works is to extract relevant information from the observations to predict the label of the action starting in τ seconds, varying from zero [32] to tens of seconds [33]. Other models leverage external cues such as hand movements to help with the anticipation task [34,35].…”
Section: Related Work (classification: mentioning, confidence: 99%)