2022
DOI: 10.3389/fnbot.2022.918434

Learning joints relation graphs for video action recognition

Abstract: Previous work on video action recognition mainly focuses on extracting spatial and temporal features from videos or on capturing physical dependencies among joints; the relations between joints are often ignored. Modeling these relations is important for action recognition. Aiming at learning discriminative relations between joints, this paper proposes a joint spatial-temporal reasoning (JSTR) framework to recognize actions from videos. For the spatial representation, a joints spatial relation graph is built to …
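The abstract is truncated, but its core idea, learning a relation graph over body joints for spatial reasoning, can be illustrated with a minimal sketch. The module name JointRelationGraph, the embedding size, and the attention-style normalization below are assumptions for illustration only, not the authors' JSTR implementation: per-joint features are embedded, pairwise affinities form a learned adjacency matrix, and joint features are aggregated over that graph.

```python
import torch
import torch.nn as nn

class JointRelationGraph(nn.Module):
    """Hypothetical sketch: learn a relation graph over joints and
    aggregate joint features over it (not the paper's exact module)."""

    def __init__(self, in_dim: int, embed_dim: int = 64):
        super().__init__()
        self.query = nn.Linear(in_dim, embed_dim)  # embeds each joint
        self.key = nn.Linear(in_dim, embed_dim)
        self.value = nn.Linear(in_dim, in_dim)

    def forward(self, joints: torch.Tensor) -> torch.Tensor:
        # joints: (batch, num_joints, in_dim) per-joint features
        q = self.query(joints)                      # (B, J, E)
        k = self.key(joints)                        # (B, J, E)
        # Pairwise affinities between joints -> learned adjacency matrix.
        scores = q @ k.transpose(1, 2) / q.shape[-1] ** 0.5
        relation = torch.softmax(scores, dim=-1)    # (B, J, J)
        # Aggregate each joint's features from its related joints.
        return relation @ self.value(joints)        # (B, J, in_dim)

# Usage: 17 COCO-style joints with 128-dim features per joint.
x = torch.randn(2, 17, 128)
out = JointRelationGraph(128)(x)
print(out.shape)  # torch.Size([2, 17, 128])
```

In this sketch the adjacency matrix is learned from the data rather than fixed by the skeleton topology, which is one common way to capture joint relations beyond physical connectivity.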

Cited by 1 publication (1 citation statement)
References: 29 publications

“…Video action recognition is one of the most challenging tasks in the field of computer vision [1][2][3][4][5][6][7][8][9][10]. The aim of video action recognition is to extract a large amount of action information from the raw video through effective spatio-temporal modeling techniques.…”
Section: Introduction
Citation type: mentioning
Confidence: 99%