2020
DOI: 10.1109/tip.2019.2946102
Spatiotemporal Knowledge Distillation for Efficient Estimation of Aerial Video Saliency


Cited by 29 publications (6 citation statements)
References 57 publications
“…Several knowledge distillation approaches have recently been proposed for video saliency prediction. SKD-DVA [20] proposes spatio-temporal knowledge distillation with two teachers and two students, with each pair focusing on either spatial or temporal transfer. SV2T-SS [41] distills corresponding features of the teacher and student (implemented as encoder-decoder networks), based on first- and second-order feature statistics transfer.…”
Section: Knowledge Distillation for Visual Saliency Prediction
confidence: 99%
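The statistics-transfer idea quoted above for SV2T-SS can be illustrated with a short sketch: the student is penalised for deviating from the teacher's per-channel feature means (first-order) and variances (second-order). The snippet below is a minimal, hypothetical PyTorch illustration, not the cited authors' implementation; the tensor shapes, weighting factors, and the function name `feature_statistics_loss` are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def feature_statistics_loss(student_feat: torch.Tensor,
                            teacher_feat: torch.Tensor,
                            w_mean: float = 1.0,
                            w_var: float = 1.0) -> torch.Tensor:
    """Match per-channel mean (first-order) and variance (second-order)
    statistics of (B, C, H, W) teacher and student feature maps."""
    # First-order statistics: per-channel means over batch and spatial dims.
    mu_s = student_feat.mean(dim=(0, 2, 3))
    mu_t = teacher_feat.mean(dim=(0, 2, 3))
    # Second-order statistics: per-channel variances over the same dims.
    var_s = student_feat.var(dim=(0, 2, 3))
    var_t = teacher_feat.var(dim=(0, 2, 3))
    return w_mean * F.mse_loss(mu_s, mu_t) + w_var * F.mse_loss(var_s, var_t)

# Usage with random stand-in features; the teacher is frozen (no gradients).
student_feat = torch.randn(4, 64, 28, 28, requires_grad=True)
with torch.no_grad():
    teacher_feat = torch.randn(4, 64, 28, 28)
loss = feature_statistics_loss(student_feat, teacher_feat)
loss.backward()
```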
“…Several works have already used KD to obtain lightweight models suitable for UAVs. For example, Li et al. [16] applied this technique to video saliency estimation, while Liu et al. [17], Yu [18], Ding et al. [19], and Luo et al. [20] used it for object detection, object recognition, action recognition, and UAV delivery, respectively. However, to our knowledge, no work has investigated knowledge distillation to produce efficient and accurate models tailored for UAVs in the context of weed mapping.…”
Section: B. Knowledge Distillation
confidence: 99%
“…Guo et al. [22] improved the student network to produce more confident predictions with the help of the teacher network for robust student-network learning. Li et al. [23] proposed a dynamic saliency estimation approach for aerial videos via spatiotemporal knowledge distillation, which can effectively remove inter-model redundancy. Zhang et al. [24] learned to distill future knowledge from a backward neural language model (teacher) into future-aware vectors (student) during the training phase, which are incorporated into the attention layer to provide full-range context information.…”
Section: Related Work
confidence: 99%
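A common way to let a teacher guide a student toward more confident predictions, as described in the excerpt above, is temperature-scaled soft-target distillation in the style of Hinton et al. The sketch below is a generic illustration only; it is not the formulation of Guo et al. [22] or Li et al. [23], and the names `soft_target_kd_loss`, `temperature`, and `alpha` are assumptions of this example.

```python
import torch
import torch.nn.functional as F

def soft_target_kd_loss(student_logits: torch.Tensor,
                        teacher_logits: torch.Tensor,
                        labels: torch.Tensor,
                        temperature: float = 4.0,
                        alpha: float = 0.7) -> torch.Tensor:
    """Blend a hard-label cross-entropy term with a KL term that pulls the
    student toward the teacher's temperature-softened predictions."""
    # Soften both distributions before comparing them.
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=1)
    kd = F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Usage with random stand-in logits for a 10-class toy problem.
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = soft_target_kd_loss(student_logits, teacher_logits, labels)
loss.backward()
```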