2023
DOI: 10.3390/electronics12153222

Lightweight Tunnel Defect Detection Algorithm Based on Knowledge Distillation

Abstract: Tunnel construction is one of the greatest engineering feats in history, and tunnel safety management depends heavily on the detection of tunnel defects. However, existing tunnel defect detection techniques still suffer from limitations in real-time performance, portability, and accuracy. This study improves on traditional defect detection technology using a knowledge distillation algorithm: a depth pooling residual structure is designed in the teacher network to enhance its ability to extract target features…

Cited by 3 publications (2 citation statements)
References 28 publications
“…Researchers have proposed a range of CNN compression and acceleration techniques, which include knowledge distillation [1,2], neural network architecture search [3,4], pruning [5,6], and quantization [7]. Knowledge distillation uses a large model as a 'teacher' to guide the training of a smaller 'student' model, enabling the smaller model to assimilate the knowledge contained in the larger model.…”
Section: Introduction
confidence: 99%
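As a concrete illustration of the teacher-student training this statement describes, below is a minimal PyTorch sketch of a distillation loss; the toy models, temperature, and loss weighting are assumptions for illustration, not the cited paper's actual networks or settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal knowledge-distillation loss sketch (hypothetical models and
# hyperparameters; not the networks from the cited paper).
def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.7):
    """Combine a soft-target KL loss (teacher guidance) with hard-label CE."""
    # Soften both output distributions with the temperature, then match them.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=1)
    # T^2 rescaling keeps soft-loss gradients comparable to the hard loss.
    kd = F.kl_div(log_soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Toy usage: a larger 'teacher' guides a smaller 'student' on random data.
teacher = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))

x = torch.randn(8, 64)
y = torch.randint(0, 10, (8,))
with torch.no_grad():          # the teacher is frozen during distillation
    t_logits = teacher(x)
loss = distillation_loss(student(x), t_logits, y)
loss.backward()
```

The temperature softens both output distributions so the student can learn the teacher's inter-class similarities rather than only its hard predictions.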
“…Knowledge distillation uses a large model as a 'teacher' to guide the training of a smaller 'student' model, enabling the smaller model to assimilate the knowledge contained in the larger model. For example, Anfu Zhu [1] achieved an 81.4% reduction in the number of parameters of the student model compared to the teacher model by utilizing multi-dimensional knowledge distillation, increasing accuracy by 2.5% over training the student model directly. Neural architecture search is a technique that employs automated methods to discover optimal neural network structures, effectively balancing network accuracy and efficiency.…”
Section: Introduction
confidence: 99%
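For the parameter-reduction figure quoted above, the sketch below shows how such a percentage is typically measured; the two toy networks are hypothetical stand-ins, not the teacher and student models from [1].

```python
import torch.nn as nn

# Count trainable parameters of hypothetical teacher/student models, only to
# show how a reduction like the cited 81.4% would be computed.
def count_params(model: nn.Module) -> int:
    return sum(p.numel() for p in model.parameters())

teacher = nn.Sequential(nn.Linear(64, 512), nn.ReLU(), nn.Linear(512, 10))
student = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))

t, s = count_params(teacher), count_params(student)
print(f"parameter reduction: {100 * (1 - s / t):.1f}%")  # ~87% for these toy nets
```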