2023
DOI: 10.1016/j.jvcir.2023.103900
Adaptive multi-teacher softened relational knowledge distillation framework for payload mismatch in image steganalysis

Cited by 3 publications (1 citation statement)
References 19 publications
“…The student model gains the teacher model's knowledge through distillation training; a slight loss in performance is the price of transferring knowledge from a complex teacher model to a simpler student model. Currently, most knowledge distillation approaches rely on a single teacher model, overlooking the possibility that one student model can be supervised by multiple teacher models [20,21]. Conversely, treating all teacher models as equally important makes it impractical to extract additional valid knowledge from the inherent distinctions among those models.…”
Section: Knowledge Distillation
confidence: 99%
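The statement above contrasts single-teacher distillation with supervising one student by several teachers whose importance differs. As a rough, hedged illustration of that idea only (not the paper's actual framework), the sketch below combines temperature-softened KL-divergence terms from multiple teachers, each scaled by an assumed per-teacher weight, with a hard-label cross-entropy term. The function name, weighting scheme, and hyperparameter values are all assumptions introduced here for illustration.

import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits_list, teacher_weights,
                          labels, temperature=4.0, alpha=0.7):
    # Hard-label cross-entropy on the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)

    # Temperature-softened student distribution (log-probabilities for kl_div).
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)

    # Weighted, softened KL divergence against each teacher. The weights stand in
    # for an adaptive importance assignment (assumed here to sum to 1).
    kd = 0.0
    for weight, t_logits in zip(teacher_weights, teacher_logits_list):
        p_teacher = F.softmax(t_logits / temperature, dim=1)
        kd = kd + weight * F.kl_div(log_p_student, p_teacher, reduction="batchmean")
    kd = kd * temperature ** 2  # conventional rescaling of the soft-target term

    # Blend the distillation term with the supervised term.
    return alpha * kd + (1.0 - alpha) * ce

In this sketch, unequal teacher_weights are what allow the student to favor the more reliable teacher for a given input, which is the point the citation statement makes against treating all teachers as equally significant.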