2022 IEEE International Conference on Multimedia and Expo (ICME)
DOI: 10.1109/icme52920.2022.9859804
Few-Shot Unsupervised Domain Adaptation via Meta Learning

Abstract: Few-shot unsupervised domain adaptation (FS-UDA) utilizes few-shot labeled source domain data to realize effective classification in an unlabeled target domain. However, current FS-UDA methods still suffer from two issues: 1) data from different domains cannot be effectively aligned with few-shot labeled data due to large domain gaps; 2) generalizing to new FS-UDA tasks is unstable and time-consuming. To address these issues, we put forward a novel Efficient Meta Prompt Learning Framework for FS-UDA.…

Cited by 4 publications (4 citation statements)
References 34 publications
“…Additionally, they assume the input-output relationship (i.e., p(y|x)) to be the same across domains. To solve these problems, some methods [155,156,157,153] have been proposed. Effective domain generalization via meta-learning: Domain generalization enables models to perform well on new and unseen domains without requiring access to their data, as illustrated in Figure 8. This is particularly useful in scenarios where access to data is restricted due to real-time deployment requirements or privacy policies.…”
Section: Meta-learning and Domain Adaptation/generalizationmentioning
confidence: 99%
“…Optimization-based methods (Bertinetto et al 2019) (Finn, Abbeel, and Levine 2017) (Ravi and Larochelle 2017) usually train a meta learner over an auxiliary dataset to learn a general initialization model, which can quickly fine-tune and adapt to new tasks. The main purpose of metric-based methods (Li et al 2019) (Snell, Swersky, and Zemel 2017) (Vinyals et al 2016) (Ye et al 2020) is to learn a generalizable feature embedding for metric learning, which can immediately adapt to new tasks without any fine-tuning or retraining. Typically, ProtoNet (Snell, Swersky, and Zemel 2017) learns the class prototypes in the support set and classifies the query images based on the maximum similarity to these prototypes.…”
Section: Related Workmentioning
confidence: 99%
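The ProtoNet classification rule described in the excerpt above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes embeddings have already been computed by some backbone, and the function names are hypothetical.

```python
# Sketch of ProtoNet-style classification (Snell, Swersky, and Zemel 2017):
# class prototype = mean support embedding; queries go to nearest prototype.
import numpy as np

def prototypes(support_emb, support_labels, n_classes):
    # One prototype per class: mean of that class's support embeddings.
    return np.stack([support_emb[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_emb, protos):
    # Similarity = negative squared Euclidean distance to each prototype.
    d = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

# Toy 2-way, 2-shot example with 2-D embeddings.
sup = np.array([[0.0, 0.0], [0.2, 0.0], [5.0, 5.0], [5.2, 5.0]])
lab = np.array([0, 0, 1, 1])
p = prototypes(sup, lab, 2)
q = np.array([[0.1, 0.1], [4.9, 5.1]])
print(classify(q, p))  # -> [0 1]
```

Because classification is a fixed nearest-prototype rule, no per-task fine-tuning is needed, which is exactly the property the excerpt attributes to metric-based methods.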
“…Currently, a setting named few-shot unsupervised domain adaptation (FS-UDA) (Huang et al 2021) (Yang et al 2022), which uses a few labeled samples in the source domain to train a model that classifies unlabeled data in the target domain, offers practical feasibility. Typically, an FS-UDA model learns general knowledge from base classes during training to guide classification on novel classes during testing.…”
Section: Introductionmentioning
confidence: 99%
“…As a result, transfer learning is increasingly recognized as a vital approach for tackling practical challenges. Within the realm of transfer learning, two specialized techniques are zero-shot learning [29]- [32] and few-shot learning [33], [34]. These approaches address scenarios where there is limited or no labeled data available in the target domain.…”
Section: Introductionmentioning
confidence: 99%