2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.01153

Online Knowledge Distillation for Efficient Pose Estimation

Cited by 77 publications (36 citation statements: 0 supporting, 36 mentioning, 0 contrasting) · References 43 publications

“…We think the generalization ability of the model is more important for dehazing tasks, and a little distortion is tolerable. […] Variant B, the proposed method without the FAB (we replace it with the feature aggregation unit designed in paper 30); Variant C, the proposed method without the multiscale feature-shared network (we replace it with a single-scale network consisting of AGRDBs); Variant D, the proposed method without the knowledge distillation of the model-based student branch; Variant E, the proposed method without the knowledge distillation of the model-free student branch; Variant F, the proposed method. We train these variants on the ITS dataset for 30 epochs, test them on the SOTS outdoor dataset, I-HAZE, and O-HAZE, and conduct quantitative comparisons to evaluate the performance of each variant.…”
Section: Comparisons With State-of-the-art Methods (mentioning)
confidence: 99%
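For bookkeeping, the six variants described above can be captured as a small configuration table. The sketch below is hypothetical: the flag names and dict layout are ours, not the cited paper's; it only restates which component each variant disables.

```python
# Hypothetical encoding of the ablation variants quoted above; the flag
# names and layout are illustrative, not taken from the cited paper.
# Variant A is elided in the excerpt, so it is omitted here as well.
ABLATION_VARIANTS = {
    "B": dict(fab=False, multiscale=True,  kd_model_based=True,  kd_model_free=True),   # FAB swapped for paper 30's unit
    "C": dict(fab=True,  multiscale=False, kd_model_based=True,  kd_model_free=True),   # single-scale AGRDB network
    "D": dict(fab=True,  multiscale=True,  kd_model_based=False, kd_model_free=True),
    "E": dict(fab=True,  multiscale=True,  kd_model_based=True,  kd_model_free=False),
    "F": dict(fab=True,  multiscale=True,  kd_model_based=True,  kd_model_free=True),   # full proposed method
}
```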
“…Zhang et al. 29 proposed a self-distillation strategy, which constructs a deep CNN and distills the features of the deep convolutional layers into the shallower ones. Moreover, Li et al. 30 proposed a novel online knowledge distillation method that does not rely on pretrained teachers and improves the accuracy of pose estimation. Inspired by this, we build an online knowledge distillation network for single-image dehazing named OKDNet.…”
Section: Knowledge Distillation (mentioning)
confidence: 99%
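To make the online-distillation idea concrete, here is a minimal sketch in PyTorch-style Python. It assumes a multi-branch student whose branch outputs are averaged into an on-the-fly teacher; the plain average, the MSE losses, and the `alpha` weight are illustrative assumptions, not the cited papers' exact formulation (the pose-estimation work aggregates branches with a learned Feature Aggregation Unit rather than a simple mean).

```python
import torch
import torch.nn.functional as F

def online_distillation_loss(branch_outputs, target, alpha=0.5):
    """Online KD: the teacher is assembled on the fly from the student
    branches, so no pretrained teacher network is needed.

    branch_outputs: list of per-branch predictions (e.g. pose heatmaps),
                    each of shape (N, C, H, W).
    target:         ground-truth tensor of the same shape.
    alpha:          weight of the distillation term (illustrative value).
    """
    # Build the ensemble teacher; detach() stops gradients so the branches
    # are pulled toward the ensemble, not the other way around.
    # NOTE: a plain average is a simplifying assumption here.
    teacher = torch.stack(branch_outputs).mean(dim=0).detach()

    # Each branch learns from the ground truth and from the online teacher.
    task_loss = sum(F.mse_loss(out, target) for out in branch_outputs)
    distill_loss = sum(F.mse_loss(out, teacher) for out in branch_outputs)
    return task_loss + alpha * distill_loss
```

At test time only a single branch is kept, so the distilled model runs at the cost of one compact network; this is what makes the scheme attractive for efficient pose estimation and, as in OKDNet, for dehazing.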
“…Designing efficient human pose estimators has been intensively studied for practical usage. For extracting 2D poses from images, state-of-the-art methods [22,26,36,50,53] have achieved real-time inference speed. In terms of multiview 3D pose estimation, Bultmann et al.…”
Section: Efficient Human Pose Estimation (mentioning)
confidence: 99%
“…More recently, deep learning techniques [81,90,103,146,147] have enabled learning feature representations automatically from data, which has significantly contributed to the advancement of human pose estimation. These deep learning-based approaches [9,86,91,99,101,153,161,181], commonly building upon the success of convolutional neural networks, have achieved outstanding performance on this task. Given this rapid development, this paper seeks to track recent progress and summarize these accomplishments to deliver a clearer panorama of 2D human pose estimation.…”
Section: Introduction (mentioning)
confidence: 99%