2023
DOI: 10.3390/s23010520
Joint-Based Action Progress Prediction

Abstract: Action understanding is a fundamental computer vision branch for several applications, ranging from surveillance to robotics. Most works deal with localizing and recognizing the action in both time and space, without providing a characterization of its evolution. Recent works have addressed the prediction of action progress, which is an estimate of how far the action has advanced as it is performed. In this paper, we propose to predict action progress using a different modality compared to previous methods: bo…
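To make the task concrete, the idea of regressing action progress from body joints can be sketched with a toy example. The skeleton size, the synthetic "drifting pose" data, and the plain least-squares regressor below are illustrative assumptions, not the paper's actual model: the point is only that a progress scalar in [0, 1] is predicted per frame from joint coordinates.

```python
import numpy as np

# Hypothetical toy setup: 17 joints (a COCO-style skeleton) in 2D, one clip
# of 50 frames. These numbers are assumptions for illustration only.
N_JOINTS = 17
N_FRAMES = 50

rng = np.random.default_rng(0)

# Synthetic clip: each joint drifts linearly over the clip plus small noise,
# so the normalized frame index (the "progress") is recoverable from the
# pose alone.
base = rng.normal(size=(N_JOINTS, 2))
drift = rng.normal(size=(N_JOINTS, 2))
t = np.linspace(0.0, 1.0, N_FRAMES)            # ground-truth progress per frame
poses = base + t[:, None, None] * drift        # shape (N_FRAMES, N_JOINTS, 2)
poses += 0.01 * rng.normal(size=poses.shape)

# Minimal regressor: flattened joint coordinates (+ bias) -> progress scalar.
X = poses.reshape(N_FRAMES, -1)
X = np.hstack([X, np.ones((N_FRAMES, 1))])
w, *_ = np.linalg.lstsq(X, t, rcond=None)

pred = X @ w                                   # predicted progress, one per frame
mae = np.abs(pred - t).mean()
print(f"mean absolute progress error: {mae:.3f}")
```

In the paper's setting a sequence model over joints would replace the linear fit, but the input/output contract is the same: a per-frame pose goes in, a progress estimate comes out.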

Cited by 6 publications (2 citation statements)
References 64 publications
“…Gestures are investigated in [ 4 ], where a method for capturing gestures automatically from videos and transforming them into stored 3D representations is proposed. In [ 5 ], the authors exploit body joints to predict action progress.…”
Section: Overview Of Contribution
confidence: 99%
“…Recognizing gestures from fewer sample videos is a challenging task because traditional 3D CNNs require more data than 2D CNNs for classification. Human body joints store highly accurate information about human positions, making them a considerably more convenient and efficient way to describe activities and, consequently, how they are carried out [8]. Human skeletons are also easily obtained using sensors or pose-recognition tools, and they are robust to changes in background and illumination [6].…”
Section: Introduction
confidence: 99%