2020
DOI: 10.1155/2020/8717942
Research on Discriminative Skeleton-Based Action Recognition in Spatiotemporal Fusion and Human-Robot Interaction

Abstract: A novel posture motion-based spatiotemporal fused graph convolutional network (PM-STGCN) is presented for skeleton-based action recognition. Existing skeleton-based action recognition methods independently compute the joint information within a single frame and the motion information of joints between adjacent frames from the human body skeleton structure, and then combine the classification results. However, this does not take into consideration the complicated temporal and spatial relationships of the…
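The abstract distinguishes per-frame joint information from inter-frame motion information. A minimal sketch of how those two streams are commonly derived from raw skeleton coordinates is given below; the array shapes and the function name are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def joint_and_motion_features(skeleton):
    """Split a skeleton sequence into spatial (joint) and temporal (motion) streams.

    skeleton: array of shape (T, V, C) -- T frames, V joints, C coordinates.
    Returns (joint_stream, motion_stream), both of shape (T, V, C); the motion
    stream is the frame-to-frame displacement of each joint, zero-padded at the
    first frame.
    """
    joint_stream = skeleton                        # posture within each single frame
    motion = skeleton[1:] - skeleton[:-1]          # displacement between adjacent frames
    motion_stream = np.concatenate(
        [np.zeros_like(skeleton[:1]), motion], axis=0
    )
    return joint_stream, motion_stream
```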

Cited by 3 publications (5 citation statements) | References 13 publications
“…ST-GCN is a typical spatiotemporal approach since it performs GCN on the spatiotemporal graph (STG) directly and therefore extracts spatiotemporal information simultaneously. Methods such as [29, 48, 54, 60, 68, 82, 86, 96, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113] are all developed based on ST-GCN. Methods based on AGCN also work on the STG, such as [66, 73, 93, 114].…”
Section: A New Taxonomy for Skeleton-GNN-Based HAR (mentioning)
confidence: 99%
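As a rough illustration of what performing GCN directly on a spatiotemporal graph involves, the sketch below pairs a spatial graph convolution over the joint adjacency with a temporal convolution along the frame axis. The class name, layer sizes, and the pre-normalized adjacency A are assumptions for illustration, not the cited implementation.

```python
import torch
import torch.nn as nn

class STGCNBlock(nn.Module):
    """Spatial graph convolution over joints followed by temporal convolution over frames."""

    def __init__(self, in_ch, out_ch, A, t_kernel=9):
        super().__init__()
        self.register_buffer("A", A)              # (V, V) normalized joint adjacency
        self.spatial = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.temporal = nn.Conv2d(
            out_ch, out_ch, kernel_size=(t_kernel, 1),
            padding=((t_kernel - 1) // 2, 0)
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        # x: (N, C, T, V) -- batch, channels, frames, joints
        x = self.spatial(x)                               # per-joint feature transform
        x = torch.einsum("nctv,vw->nctw", x, self.A)      # aggregate over skeleton edges
        x = self.temporal(x)                              # mix information across frames
        return self.relu(x)
```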
“…Q.B. Zhong et al. [109] emphasized the joints with more motion and proposed a novel local posture motion-based attention module (LPM-TAM) to filter out low-motion information in the temporal domain. This operation helps improve the ability of motion-related feature extraction.…”
Section: The Common Framework (mentioning)
confidence: 99%
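Based only on this description, the general idea of weighting temporal information by motion magnitude can be sketched as follows. This is a hedged guess at the mechanism; the function name and weighting scheme are assumptions, not the module defined in [109].

```python
import torch

def motion_temporal_attention(x):
    """Weight each frame by the overall motion magnitude of its joints.

    x: (N, C, T, V) skeleton features. Frames whose joints move little between
    adjacent time steps receive small weights, so low-motion temporal
    information is attenuated.
    """
    motion = x[:, :, 1:, :] - x[:, :, :-1, :]                 # frame-to-frame differences
    motion = torch.cat([torch.zeros_like(x[:, :, :1, :]), motion], dim=2)
    score = motion.abs().mean(dim=(1, 3), keepdim=True)       # (N, 1, T, 1) motion magnitude
    attn = torch.softmax(score, dim=2)                        # normalize over the T axis
    return x * attn                                           # emphasize high-motion frames
```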
“…where p means the similarity coefficient, which is between 0 and 1. The larger the value of p, the higher the similarity between the human motion posture feature and the candidate area posture feature [26,27]. When p reaches the maximum value, the candidate area of the human motion posture will become the human motion posture feature to be solved in this frame of image ρ.…”
Section: Extracting of Human Motion Posture Features (mentioning)
confidence: 99%
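A worked sketch of the selection rule described in this quotation: compute a similarity coefficient p in [0, 1] between the tracked posture feature and each candidate area's feature, then take the candidate with the maximum p as the posture feature ρ for the current frame. Cosine similarity is an assumption here; the cited excerpt does not specify the measure.

```python
import numpy as np

def select_posture_candidate(posture_feat, candidate_feats):
    """Pick the candidate area whose feature is most similar to the tracked posture.

    posture_feat: (D,) feature of the human motion posture.
    candidate_feats: (K, D) features of K candidate areas.
    Returns (best_candidate, p), where p in [0, 1] is the similarity coefficient.
    """
    # Cosine similarity rescaled to [0, 1] serves as the similarity coefficient p.
    num = candidate_feats @ posture_feat
    den = np.linalg.norm(candidate_feats, axis=1) * np.linalg.norm(posture_feat) + 1e-8
    p = (num / den + 1.0) / 2.0
    k = int(np.argmax(p))                  # candidate where p reaches its maximum
    return candidate_feats[k], float(p[k])
```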