2022
DOI: 10.1007/978-3-031-19781-9_11
Dual-Stream Knowledge-Preserving Hashing for Unsupervised Video Retrieval

Cited by 11 publications (1 citation statement) · References 42 publications
“…General knowledge distillation methods [16,21] use the output of the teacher model to generate soft guidance and force the student model to predict similar results. With the development of large-scale pre-trained models, many works utilize pre-trained models to distill their models.…”
Section: Knowledge Distillation
confidence: 99%
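The quoted statement describes the standard soft-guidance setup: the teacher's output distribution supervises the student via a temperature-scaled KL divergence. A minimal numpy sketch of that loss (function names and the temperature value are illustrative, not from the cited paper):

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: higher T yields a softer distribution.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # KL(teacher || student) on temperature-softened outputs, scaled by T^2
    # as in the classic soft-target formulation; forces the student to
    # predict results similar to the teacher's.
    p = softmax(teacher_logits, T)  # soft guidance from the teacher
    q = softmax(student_logits, T)
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)
    return float(kl.mean() * T * T)
```

The loss is zero when the student reproduces the teacher's logits and grows as their softened distributions diverge; in practice it is mixed with a hard-label cross-entropy term.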