2020
DOI: 10.1016/j.patrec.2020.06.015
Data augmentation method for improving the accuracy of human pose estimation with cropped images

Cited by 27 publications (12 citation statements)
References 3 publications
“…All the above augmentation operations are generally applicable to many different computer vision tasks. In addition, pose-specific augmentation techniques [40,79,57] have been proposed. Specifically, in the real world it is often the case that human body joints are obscured by other objects; in crowded scenes, there may be multiple joints of the same class, which can greatly confuse a model.…”
Section: Random Data Augmentationmentioning
confidence: 99%
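The occlusion problem described in the excerpt is often addressed by synthetically occluding joints during training. A minimal sketch of that idea, pasting a flat patch over a randomly chosen joint, is shown below; the function name, the list-of-lists image representation, and the patch strategy are illustrative assumptions, not the cited papers' exact method.

```python
import random

def occlude_joint(image, joints, size=20, fill=128):
    """Paste a flat square patch over one randomly chosen joint.

    `image` is an H x W list-of-lists of grayscale pixels; `joints` is a
    list of (x, y) coordinates. This is a hypothetical helper sketching
    pose-specific occlusion augmentation, not a reference implementation.
    """
    h, w = len(image), len(image[0])
    jx, jy = random.choice(joints)        # occlude one joint at random
    half = size // 2
    # Clamp the patch to the image bounds before overwriting pixels.
    for y in range(max(0, int(jy) - half), min(h, int(jy) + half)):
        for x in range(max(0, int(jx) - half), min(w, int(jx) + half)):
            image[y][x] = fill
    return image
```

Because the joint position is known from the label, the model can still be supervised to predict the occluded joint, which is what makes this augmentation pose-specific rather than generic.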
“…Generic data augmentation [75,60]; pose-specific data augmentation [40,79,57]; optimized data augmentation: Neural Architecture Search (NAS) based methods [35] and Generative Adversarial Network (GAN) based methods [58]; deep human pose model…”
Section: Random Data Augmentationmentioning
confidence: 99%
“…The original dataset uses 10,000 images as the training set and 3,466 images as the test set. Data augmentation [20,21] can be used to increase the number of training samples and thereby effectively improve the performance of the convolutional neural network. To remain consistent with real-world conditions, we apply center rotations of 15 degrees to the left and 12 degrees to the right (the label coordinates are rotated accordingly); the rotation method is given in formula (8), where θ is the rotation angle.…”
Section: Datasetmentioning
confidence: 99%
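The excerpt notes that when an image is rotated about its center, the label coordinates must be rotated with it. A minimal sketch of that coordinate update is below, using the standard 2D rotation about a center point; the function name is hypothetical, and formula (8) itself is not reproduced in the excerpt, so this is only the conventional form of such a rotation.

```python
import math

def rotate_keypoints(keypoints, theta_deg, center):
    """Rotate (x, y) keypoints about `center` by `theta_deg` degrees.

    Illustrative helper for keeping pose labels consistent with a
    center rotation of the image; positive angles rotate
    counter-clockwise in standard math convention.
    """
    theta = math.radians(theta_deg)
    cx, cy = center
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    out = []
    for x, y in keypoints:
        dx, dy = x - cx, y - cy               # shift to rotation center
        out.append((cx + dx * cos_t - dy * sin_t,
                    cy + dx * sin_t + dy * cos_t))
    return out
```

In practice the image is rotated with the same angle (e.g. via an affine warp), and this transform is applied to every labeled joint so image and annotation stay aligned.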
“…Generally, pose estimation performance under truncation has not been studied extensively in the literature. Recent work by Park et al. [44] uses cropping-based data augmentation (Fig. 2).…”
Section: Truncated Pose Estimationmentioning
confidence: 99%