2023
DOI: 10.3390/app13074205
STJA-GCN: A Multi-Branch Spatial–Temporal Joint Attention Graph Convolutional Network for Abnormal Gait Recognition

Abstract: Early recognition of abnormal gait enables physicians to determine a prompt rehabilitation plan for patients for the most effective treatment and care. The Kinect depth sensor can easily collect skeleton data describing the position of joints in the human body. However, the default human skeleton model of Kinect includes an excessive number of joints, which limits the accuracy of gait recognition methods and increases the computational resources required. In this study, we propose an optimized human s…

Cited by 6 publications (2 citation statements)
References 27 publications
“…Model-based gait recognition concerns identification using an underlying mathematical construct(s) representing the discriminatory gait characteristics (be they static or dynamic), with a set of parameters and a set of logical and quantitative relationships between them [2]. The model-based method [3–10] can be divided into two steps. The first step is to mathematically model the human body structure and movement.…”
Section: Model-basedmentioning
confidence: 99%
“…The batch size format used in this experiment is (P, K), where P is the number of subjects to be identified and K is the number of sample sequences per subject. Specifically, for the CASIA-B [30] experiment, this paper sets the batch size to (8, 12) and trains for 120k iterations. Because OU-MVLP [31] contains many more subjects, the batch size is changed to (32, 10) and training runs for 250k iterations.…”
Section: Parameter Settingsmentioning
confidence: 99%
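The (P, K) batching scheme described in the citation above is a standard identity-balanced sampling strategy. As a hedged illustration (the cited paper's actual sampler is not shown; function and variable names here are hypothetical), a minimal sketch of building such batches from a list of per-sequence subject labels might look like:

```python
import random
from collections import defaultdict

def pk_batches(labels, P, K, seed=0):
    """Yield batches of sequence indices under the (P, K) scheme:
    each batch holds P subjects with K gait sequences per subject."""
    rng = random.Random(seed)
    by_subject = defaultdict(list)
    for idx, subj in enumerate(labels):
        by_subject[subj].append(idx)
    # Keep only subjects with at least K sequences available.
    subjects = [s for s, idxs in by_subject.items() if len(idxs) >= K]
    rng.shuffle(subjects)
    for i in range(0, len(subjects) - P + 1, P):
        batch = []
        for subj in subjects[i:i + P]:
            batch.extend(rng.sample(by_subject[subj], K))
        yield batch

# Toy example: 16 subjects with 12 sequences each, batched as (P=8, K=12),
# mirroring the CASIA-B setting quoted above.
labels = [s for s in range(16) for _ in range(12)]
batches = list(pk_batches(labels, P=8, K=12))
# Each batch then contains 8 * 12 = 96 sequence indices.
```

In practice this kind of sampler is paired with a triplet or contrastive loss, since every batch is guaranteed to contain both positive pairs (same subject) and negatives (different subjects).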