2024
DOI: 10.1007/s11063-024-11646-5

Knowledge Distillation Based on Narrow-Deep Networks

Yan Zhou, Zhiqiang Wang, Jianxun Li

Abstract: Deep neural networks generally outperform shallow ones, but reaching that performance tends to require deeper or wider architectures, which introduce large numbers of parameters and computations. Networks that are too wide carry a high risk of overfitting, while networks that are too deep demand heavy computation. This paper proposes a narrow-deep ResNet, increasing the depth of the network while avoiding the issues caused by making the network too wide, and applies the strategy of knowledge distillation, where we se…
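As context for the abstract, below is a minimal sketch of the standard knowledge-distillation objective (Hinton et al., 2015) that such a teacher-student setup builds on, not the paper's exact loss; the function name and the hyperparameters T (temperature) and alpha (soft/hard weighting) are illustrative assumptions.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Hypothetical helper; T and alpha are illustrative, not from the paper.
    # Soft targets: teacher probabilities softened by temperature T.
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_student = F.log_softmax(student_logits / T, dim=1)
    # KL term is scaled by T^2 so its gradients stay comparable to the hard loss.
    kd = F.kl_div(log_student, soft_targets, reduction="batchmean") * (T * T)
    # Hard loss: ordinary cross-entropy against the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Example with random logits standing in for a (hypothetical) narrow-deep
# student and a wider teacher, over a batch of 8 samples and 10 classes:
student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)

The weighted sum lets the student learn both from the teacher's softened output distribution and from the true labels, which is the usual way a smaller student network is trained against a larger teacher.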

Cited by 0 publications. References 16 publications.