2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.01250

Inter-Region Affinity Distillation for Road Marking Segmentation

Abstract: We study the problem of distilling knowledge from a large deep teacher network to a much smaller student network for the task of road marking segmentation. In this work, we explore a novel knowledge distillation (KD) approach that can transfer 'knowledge' on scene structure more effectively from a teacher to a student model. Our method is known as Inter-Region Affinity KD (IntRA-KD). It decomposes a given road scene image into different regions and represents each region as a node in a graph. An inter-region a…
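
To make the affinity idea concrete, below is a minimal PyTorch sketch of the general recipe the abstract describes: pool each region (e.g., each road-marking class mask) into a node embedding, build a pairwise affinity matrix over the nodes, and train the student to match the teacher's matrix. The function names, the masked-average-pooling choice, and the cosine/MSE formulation are illustrative assumptions, not the paper's exact method.

```python
import torch
import torch.nn.functional as F

def region_nodes(features, label_masks):
    """Masked average pooling: one embedding per region.
    features:    (B, C, H, W) feature map from teacher or student.
    label_masks: (B, K, H, W) one binary mask per region (hypothetical input).
    returns:     (B, K, C) node embeddings."""
    masks = label_masks.float()
    pooled = torch.einsum('bchw,bkhw->bkc', features, masks)
    area = masks.sum(dim=(2, 3)).clamp(min=1.0).unsqueeze(-1)  # (B, K, 1)
    return pooled / area

def affinity(nodes):
    """Pairwise cosine similarity between region nodes: (B, K, K)."""
    n = F.normalize(nodes, dim=-1)
    return n @ n.transpose(1, 2)

def inter_region_affinity_loss(student_feats, teacher_feats, label_masks):
    """Train the student's affinity matrix to match the (frozen) teacher's."""
    a_student = affinity(region_nodes(student_feats, label_masks))
    a_teacher = affinity(region_nodes(teacher_feats, label_masks))
    return F.mse_loss(a_student, a_teacher.detach())
```

In training, a loss of this shape would typically be added to the usual per-pixel segmentation loss with a small weight.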


Cited by 120 publications (75 citation statements) · References 29 publications
“…Compared with SIM-CycleGAN [39], although it is specifically designed for different scenarios, FANet is close to it on many metrics, or even better. Compared with the knowledge distillation method IntRA-KD [14] and the network-search method CurveLanes-NAS [40], FANet achieves a higher F1 score by 3.09% and 4.09%, respectively. Compared with UFLD, although UFLD is faster, FANet outperforms it by a 7.09% F1 score.…”
Section: Methods
confidence: 99%
“…However, the lanes are represented as segmented binary features in segmentation methods, which makes it difficult to aggregate the overall information of lanes. Some works [10, 12, 14] utilize specially designed spatial feature aggregation modules to effectively enhance long-distance perception ability, but they also increase the computational complexity and slow down inference.…”
Section: Introduction
confidence: 99%
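
For context on the aggregation modules this excerpt refers to, here is a minimal PyTorch sketch of one representative scheme, SCNN-style slice-wise message passing; the class name and kernel size are hypothetical. The sequential row-by-row loop is exactly why such modules widen the receptive field yet slow inference, as the excerpt notes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialMessagePassing(nn.Module):
    """SCNN-style top-to-bottom pass: each row of the feature map receives
    a convolved message from the row above, so information can travel the
    full image height (full SCNN also runs bottom-up and left/right passes)."""

    def __init__(self, channels, kernel=9):  # kernel size is a hypothetical choice
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel, padding=kernel // 2)

    def forward(self, x):             # x: (B, C, H, W)
        rows = list(x.unbind(dim=2))  # H tensors of shape (B, C, W)
        for i in range(1, len(rows)):
            # sequential update: row i sees information from every row above it
            rows[i] = rows[i] + F.relu(self.conv(rows[i - 1]))
        return torch.stack(rows, dim=2)
```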
“…To the best of our knowledge, there is no one-size-fits-all objective function for training neural networks for different purposes, and the context of application is the key consideration on which a suitable choice is made. Accordingly, for different steps or constraints imposed toward the same aim, it is very common in recent literature [47], [65]-[67] to adopt more than one kind of objective function throughout the whole procedure. Because objective functions are among the most important factors affecting the performance of deep learning algorithms, more comprehensive justification of the choice and more detailed experimental validation are expected in future research in the lane marking detection community.…”
Section: B. Objective Functions for Deep Unsupervised / Semi-supervised Learning Models
confidence: 99%
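
As an illustration of the multi-objective training the excerpt describes, here is a hedged PyTorch sketch combining a hard-label cross-entropy term with a softened teacher-matching (Hinton-style) KD term; the temperature T and weight alpha are hypothetical choices, not values from the cited works.

```python
import torch.nn.functional as F

def combined_objective(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Joint objective: hard-label cross-entropy plus a softened
    teacher-matching KL term; T and alpha are hypothetical."""
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction='batchmean') * (T * T)  # T**2 rescales gradients
    return (1.0 - alpha) * ce + alpha * kd
```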
“…Regarding the impact of novel convolution modules on detection performance, SpinNet [68] is better than SCNN [38] because it incorporates more directional features. Compared with self-attention distillation [43], the teacher network contributes substantially to improving the student network's performance, which enables [47] to achieve better detection results. [49] attains better performance via multi-task and multi-resolution prediction.…”
Section: Performance Comparison
confidence: 99%
“…Furthermore, knowledge from intermediate feature maps was distilled for network minimization [42] and performance improvement [43, 44]. Knowledge distillation has been employed in various applications such as object detection [45], semantic segmentation [46], domain adaptation [47], and defense against adversarial examples [48]. Recently, the teacher–student learning framework has been combined with other advanced learning methodologies such as adversarial learning [49] and semi-supervised learning [50].…”
Section: Related Work
confidence: 99%
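
To illustrate the intermediate-feature distillation mentioned in this excerpt, here is a minimal FitNets-style sketch (an illustration, not the cited papers' exact formulation): a 1x1 convolution adapts the student's channel width to the teacher's before an L2 match against the frozen teacher features.

```python
import torch.nn as nn
import torch.nn.functional as F

class FeatureDistiller(nn.Module):
    """FitNets-style intermediate-feature distillation: a 1x1 conv maps
    student channels to teacher channels, then the two maps are matched
    with an L2 loss while the teacher remains frozen."""

    def __init__(self, student_ch, teacher_ch):
        super().__init__()
        self.adapt = nn.Conv2d(student_ch, teacher_ch, kernel_size=1)

    def forward(self, student_feat, teacher_feat):  # both (B, *, H, W)
        return F.mse_loss(self.adapt(student_feat), teacher_feat.detach())
```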