Proceedings of the 30th ACM International Conference on Multimedia 2022
DOI: 10.1145/3503161.3548052

TGDM: Target Guided Dynamic Mixup for Cross-Domain Few-Shot Learning

Abstract: Given sufficient training data on the source domain, cross-domain few-shot learning (CD-FSL) aims at recognizing new classes with a small number of labeled examples on the target domain. The key to addressing CD-FSL is to narrow the domain gap and transfer the knowledge of a network trained on the source domain to the target domain. To help knowledge transfer, this paper introduces an intermediate domain generated by mixing images in the source and the target domain. Specifically, to generate the optimal inter…
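The abstract's core idea, generating intermediate-domain images by mixing source- and target-domain images, can be sketched with plain mixup. This is a minimal sketch, not the paper's method: the function name, the fixed mix ratio `lam`, and the toy arrays are illustrative assumptions (TGDM produces the ratio dynamically under target guidance, which is not shown here).

```python
import numpy as np

def mixup_intermediate(src_img, tgt_img, lam):
    """Blend a source-domain and a target-domain image into an
    intermediate-domain image via convex combination. `lam` in [0, 1]
    is the mix ratio; here it is just a fixed scalar, whereas TGDM
    learns it dynamically (not modeled in this sketch)."""
    assert src_img.shape == tgt_img.shape, "images must share a shape"
    return lam * src_img + (1.0 - lam) * tgt_img

# Toy example: blend two 2x2 single-channel "images".
src = np.ones((2, 2), dtype=np.float32)        # all-ones source image
tgt = np.zeros((2, 2), dtype=np.float32)       # all-zeros target image
mixed = mixup_intermediate(src, tgt, lam=0.7)  # every pixel becomes 0.7
```

In practice the same blend would be applied to batches of real images (and, in standard mixup, to their labels as well), with the ratio chosen per batch rather than fixed.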

Cited by 14 publications (2 citation statements)
References 37 publications
“…Second, it divides the modulation parameters into the domain-specific and the domain-cooperative sets to explore the intra-domain information and inter-domain correlations, respectively. Furthermore, [139] explores a novel target guided dynamic mixup (TGDM) framework to generate intermediate-domain images that help FSL task learning on the target domain. In addition, [76] learns meta-learners by utilizing multiple domains, and the meta-learners are combined in the parameter space to serve as the initialized parameters of the network used in the target domain.…”
Section: Hybrid Approaches
confidence: 99%
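The parameter-space combination described for [76] can be illustrated with a small sketch; the function name, the dict-of-arrays parameter format, and the uniform weighting below are illustrative assumptions, not the cited method's exact procedure.

```python
import numpy as np

def combine_meta_learners(param_sets, weights=None):
    """Combine several meta-learners in parameter space by a (weighted)
    average of their parameters; the result can serve as the initial
    parameters of a network on the target domain. Each meta-learner is
    represented as a dict mapping parameter names to arrays (an
    assumption of this sketch). Defaults to a uniform average."""
    if weights is None:
        weights = [1.0 / len(param_sets)] * len(param_sets)
    combined = {}
    for name in param_sets[0]:
        combined[name] = sum(w * p[name] for w, p in zip(weights, param_sets))
    return combined

# Two toy meta-learners with a single parameter tensor each.
m1 = {"w": np.array([1.0, 3.0])}
m2 = {"w": np.array([3.0, 5.0])}
init = combine_meta_learners([m1, m2])  # uniform average of parameters
```

A uniform average is only one simple instance of parameter-space combination; the weights could equally be learned or set per domain.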
“…Contrastive learning approaches are also used, especially in combination with feature selection and with a mixup module that uses a few samples of the target data for image diversity [16], and feature disentanglement to reduce domain bias [20]. A similar idea introduces an intermediate domain created by mixing source and target domain images to bridge the domain gap [3,66]. While prior works, including the ones that evaluate on chest X-ray datasets [16,58], operate in a multi-class setup, we go beyond and consider a multi-label setup with an overlapping train-test label space and domain discrepancies between training and testing.…”
Section: Cross-Domain Few-Shot Learning (CD-FSL)
confidence: 99%