2022
DOI: 10.1049/cvi2.12110
Ghost shuffle lightweight pose network with effective feature representation and learning for human pose estimation

Abstract: Despite their success, existing human pose estimation approaches mostly have complex architectures, high computational cost, and a lack of lightweight modules. To address this problem, this paper proposes a Ghost Shuffle Lightweight Pose Network (GSLPN) with a more lightweight and efficient network architecture than the popular Lightweight Pose Network. First, in order to condense the scale of the network while maintaining its performance, we stack two lightweight modules, depthwise convolution and the Ghost module, to build …
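The abstract's key building block, the Ghost module, can be illustrated with a minimal PyTorch sketch. This is not the authors' GSLPN implementation; it is the generic Ghost-module idea the abstract refers to: a small primary convolution produces a few intrinsic feature maps, and cheap depthwise convolutions then generate additional "ghost" maps that are concatenated with them. All layer sizes below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Minimal Ghost-module sketch: cheap depthwise ops generate extra
    ('ghost') feature maps from a small primary convolution's output."""

    def __init__(self, in_ch: int, out_ch: int, ratio: int = 2, dw_kernel: int = 3):
        super().__init__()
        primary_ch = out_ch // ratio       # intrinsic maps from the costly conv
        cheap_ch = out_ch - primary_ch     # ghost maps from cheap depthwise ops
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(primary_ch),
            nn.ReLU(inplace=True),
        )
        self.cheap = nn.Sequential(
            # groups=primary_ch makes this a depthwise convolution
            nn.Conv2d(primary_ch, cheap_ch, dw_kernel, padding=dw_kernel // 2,
                      groups=primary_ch, bias=False),
            nn.BatchNorm2d(cheap_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        primary = self.primary(x)
        ghost = self.cheap(primary)
        return torch.cat([primary, ghost], dim=1)

x = torch.randn(1, 16, 32, 32)
y = GhostModule(16, 32)(x)
print(y.shape)  # torch.Size([1, 32, 32, 32])
```

Because half the output channels come from a depthwise convolution rather than a full convolution, the module needs far fewer parameters than a standard convolution producing the same number of channels.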

Cited by 7 publications (2 citation statements)
References 44 publications
“…In conclusion, over the past decade, the trend in the development of convolutional neural network models has shifted from large networks to lightweight and efficient networks [25]. However, most networks achieve this primarily through methods such as depthwise separable convolution, group convolution, and spatially separable convolutions [26,27], which replace standard convolution and, in turn, alleviate the computational burden on the hardware.…”
Section: Introduction
confidence: 99%
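The citation statement above names depthwise separable convolution as the main parameter-saving substitute for standard convolution. A short, hedged sketch (standard PyTorch, with illustrative channel counts) makes the saving concrete: a depthwise convolution followed by a pointwise convolution replaces one standard convolution.

```python
import torch.nn as nn

def n_params(m: nn.Module) -> int:
    """Count learnable parameters in a module."""
    return sum(p.numel() for p in m.parameters())

C_in, C_out, k = 64, 128, 3  # illustrative sizes, not from the paper

# Standard convolution: C_in * C_out * k * k weights.
standard = nn.Conv2d(C_in, C_out, k, padding=1, bias=False)

# Depthwise separable: per-channel spatial conv + 1x1 channel-mixing conv.
depthwise_separable = nn.Sequential(
    nn.Conv2d(C_in, C_in, k, padding=1, groups=C_in, bias=False),  # depthwise
    nn.Conv2d(C_in, C_out, kernel_size=1, bias=False),             # pointwise
)

print(n_params(standard))             # 64 * 128 * 9      = 73728
print(n_params(depthwise_separable))  # 64 * 9 + 64 * 128 = 8768
```

Here the separable version uses roughly 8x fewer parameters, which is the "alleviated computational burden" the citing papers describe; group convolution generalizes the same idea with `groups` between 1 and `C_in`.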
“…In order to reduce the number of model parameters, we can cut the depth and width of the model directly, but this sacrifices quite a bit of accuracy; therefore we must design the model structure carefully. Some studies [10][11][12] try to modify existing complex models, with some effect. In addition, with the dominance of Vaswani's [13] self-attention mechanism on various prediction tasks, more and more researchers are trying to apply it to computer vision tasks.…”
Section: Introduction
confidence: 99%