2021
DOI: 10.1016/j.imavis.2021.104096
Knowledge distillation methods for efficient unsupervised adaptation across multiple domains

Cited by 14 publications (4 citation statements: 0 supporting, 4 mentioning, 0 contrasting)
References 5 publications

“…These methods seek to adapt CNNs trained with annotated source video data to perform well in a target domain by leveraging unlabeled data captured from that domain. To learn a discriminant domain-invariant feature representation from source and target data, STDA methods typically rely on discrepancy-based or adversarial approaches [3,5,6,7,8,9,10,11,12,13].…”
Section: Introduction (mentioning, confidence: 99%)
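
For context, a minimal sketch of one discrepancy-based alignment term of the kind the statement refers to. The linear-kernel Maximum Mean Discrepancy shown here is an illustrative choice, not a formulation taken from the paper; the function name and signature are assumptions.

```python
# Hypothetical sketch of a discrepancy-based STDA objective: a linear-kernel
# MMD penalty between source-domain and target-domain feature batches.
import torch

def linear_mmd(source_feats: torch.Tensor, target_feats: torch.Tensor) -> torch.Tensor:
    """Squared distance between the mean source and mean target embeddings.

    source_feats, target_feats: (batch, dim) features from a shared backbone.
    Minimizing this alongside the supervised source loss pulls the two
    feature distributions together (first-moment matching only).
    """
    return (source_feats.mean(dim=0) - target_feats.mean(dim=0)).pow(2).sum()
```

In practice such a term is added to the labeled source classification loss with a trade-off weight; adversarial approaches replace it with a domain discriminator trained against the backbone.
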
“…Nevertheless, these approaches are either too complex (requiring one model per target domain), or generalize poorly on distinct target domains, particularly when adapting a smaller common CNN backbone on a growing number of targets. In [6], MTDA is performed by distilling information from target-specific teachers to a student model (deployed for testing), significantly reducing system complexity.…”
Section: Introduction (mentioning, confidence: 99%)
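
A minimal sketch of the multi-teacher distillation objective described above, assuming a standard temperature-scaled KL-divergence loss averaged over target-specific teachers; the names, the temperature, and the uniform averaging are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch: distilling several target-specific teachers into one student.
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits_list, T=2.0):
    """Average KL-divergence distillation loss over target-specific teachers.

    student_logits: (batch, classes) logits from the common student model.
    teacher_logits_list: list of (batch, classes) logits, one per teacher.
    T: softmax temperature (assumed hyperparameter).
    """
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    loss = 0.0
    for t_logits in teacher_logits_list:
        p_teacher = F.softmax(t_logits / T, dim=1)
        # KL(teacher || student), scaled by T^2 per standard KD practice.
        loss = loss + F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
    return loss / len(teacher_logits_list)
```

Only the student is deployed at test time, which is why this scheme reduces system complexity relative to keeping one adapted model per target domain.
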
“…These methods seek to adapt CNNs trained with annotated source video data to perform well in a target domain by leveraging unlabeled data captured from that domain. To learn a discriminant domain-invariant feature representation from source and target data, STDA methods rely on, e.g., discrepancy-based or adversarial approaches [5,3,6,7,8,9,10,11,12,13].…”
Section: Introduction (mentioning, confidence: 99%)
“…Nevertheless, these approaches are either too complex, requiring one model per target domain, or generalize poorly on distinct target domains, particularly when adapting a smaller common CNN backbone on a growing number of targets. In [6], MTDA is performed by distilling information from target-specific teachers to a student model (deployed for testing), significantly reducing system complexity.…”
Section: Introduction (mentioning, confidence: 99%)