2023
DOI: 10.48550/arxiv.2301.13538
Preprint
AMD: Adaptive Masked Distillation for Object Detection

Abstract: As a general model compression paradigm, feature-based knowledge distillation allows the student model to learn expressive features from the teacher counterpart. In this paper, we focus on designing an effective feature-distillation framework and propose a spatial-channel adaptive masked distillation (AMD) network for object detection. More specifically, in order to accurately reconstruct important feature regions, we first perform attention-guided feature masking on the feature map of the student networ…
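The masking step the abstract describes can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the attention definition, the `keep_ratio` parameter, and the attention-weighted loss below are simplifying assumptions standing in for AMD's spatial-channel adaptive masking and generation module.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (channels, height, width) feature maps from hypothetical
# teacher and student backbones; real maps would come from a detector.
C, H, W = 8, 4, 4
teacher = rng.standard_normal((C, H, W))
student = rng.standard_normal((C, H, W))

def spatial_attention(feat):
    """Softmax over spatial positions of the channel-averaged magnitude."""
    a = np.abs(feat).mean(axis=0).reshape(-1)   # (H*W,)
    a = np.exp(a - a.max())
    return (a / a.sum()).reshape(feat.shape[1], feat.shape[2])

def adaptive_mask(att, keep_ratio=0.5):
    """Keep each spatial position with probability that grows with its
    attention, so important regions survive masking more often — a
    simplified stand-in for attention-guided adaptive masking."""
    p_keep = keep_ratio + (1.0 - keep_ratio) * att / att.max()
    return (rng.random(att.shape) < p_keep).astype(np.float64)

att = spatial_attention(teacher)        # teacher tells us what matters
mask = adaptive_mask(att)               # (H, W), broadcast over channels
masked_student = student * mask
# Attention-weighted distillation loss: push the student to match the
# teacher most strongly at high-attention regions.
loss = float((att * (masked_student - teacher) ** 2).mean())
```

In the actual AMD framework a learned generation block reconstructs the teacher's features from the masked student features; the plain weighted-MSE term above is only meant to show where the attention-guided mask enters the objective.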

Cited by 1 publication (1 citation statement)
References 22 publications
“…As a feature-based knowledge distillation paradigm, the adaptive mask extraction network proposed in Ref. 13 can reconstruct and learn more critical object-aware features, which is helpful for the accuracy of small-object detection. By contrast, our approach seeks to enhance the performance of small-object detection by including frequency domain branches to compensate for the lack of information in the spatial domain.…”
Section: Small-Object Detection
confidence: 99%