Spatial Focus Attention for Fine-Grained Skeleton-Based Action Tasks (2022)
DOI: 10.1109/lsp.2022.3199670

Cited by 9 publications (4 citation statements)
References 32 publications
“…In order to solve this problem, the third approach uses GCNs (Yan et al., 2018; Shi et al., 2019a,b; Ye et al., 2020; Zhang et al., 2020a; Kong et al., 2022; Gao et al., 2021; Liu et al., 2021, 2022; Peng et al., 2021; Song et al., 2020; Wu et al., 2021) to capture the topological graph structure of the skeleton. For example, Yan et al. (2018) proposed spatial-temporal graph convolutional networks (ST-GCN) to extract topological spatial-temporal features, where a static graph is used to capture the relationships among joints.…”
Section: Deep Learning Based Methods
confidence: 99%
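The statement above describes the spatial component of ST-GCN: a fixed adjacency matrix derived from the skeleton's bone connections determines which joints exchange information in each layer. Below is a minimal NumPy sketch of that idea; the 5-joint chain, the symmetric normalization, and all variable names are illustrative assumptions rather than the authors' implementation, and the temporal convolution that ST-GCN interleaves with this step is omitted.

```python
import numpy as np

def normalize_adjacency(A):
    # Symmetrically normalize the static skeleton graph:
    # A_hat = D^{-1/2} (A + I) D^{-1/2}
    A_tilde = A + np.eye(A.shape[0])
    deg = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def spatial_graph_conv(X, A_hat, W):
    # X:     (T, V, C_in)  joint features over T frames and V joints
    # A_hat: (V, V)        normalized static adjacency (fixed bone links)
    # W:     (C_in, C_out) learnable projection
    # One spatial graph convolution: aggregate neighbor joints, then project.
    return np.einsum('uv,tvc,cd->tud', A_hat, X, W)

# Toy example: a hypothetical 5-joint chain skeleton observed for 3 frames.
V, T, C_in, C_out = 5, 3, 3, 8
A = np.zeros((V, V))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:  # assumed bone list
    A[i, j] = A[j, i] = 1.0
A_hat = normalize_adjacency(A)
X = np.random.randn(T, V, C_in)          # e.g. 3D joint coordinates
W = np.random.randn(C_in, C_out) * 0.1
out = spatial_graph_conv(X, A_hat, W)    # -> (T, V, C_out)
print(out.shape)
```

Because the adjacency matrix is fixed by the skeleton's anatomy, every action sample shares the same graph; later GCN variants cited above relax this by learning or adapting the graph.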
“…Compared with conventional RGB video, the 3D skeleton offers a high-level representation that is lightweight and robust to both view differences and complicated backgrounds. Therefore, 3D skeleton-based action recognition has been widely investigated with methods based on handcrafted features (Weng et al., 2017; Xia et al., 2012), Convolutional Neural Networks (CNNs) (Ke et al., 2017a,b; Li et al., 2017, 2019a; Hou et al., 2018; Xu et al., 2018), Recurrent Neural Networks (RNNs) (Li et al., 2018, 2019b; Liu et al., 2018; Song et al., 2017) and Graph Convolutional Networks (GCNs) (Yan et al., 2018; Shi et al., 2019a,b; Ye et al., 2020; Zhang et al., 2020a; Kong et al., 2022; Gao et al., 2021; Liu et al., 2022; Peng et al., 2021). However, these methods are developed in a fully supervised manner and require extensive annotated labels, which are expensive and time-consuming to obtain.…”
Section: Introduction
confidence: 99%
“…For example, when hands or legs are occluded, this type of motion information is easily lost. For skeleton-based approaches [10][11][12][13][14][15][16], lightweight skeleton data make the model less computationally costly, but without visual information from images it is easy to confuse actions with similar motion trajectories.…”
Section: Introduction
confidence: 99%