This paper investigates a valuable setting called few-shot unsupervised domain adaptation (FS-UDA), which has not been sufficiently studied in the literature. In this setting, the source domain data are labeled, but with only a few shots per category, while the target domain data are unlabeled. To address the FS-UDA setting, we develop a general UDA model that tackles two key issues: the few-shot labeled data per category and the domain adaptation between support and query sets. Our model is general in that, once trained, it can be applied to various FS-UDA tasks from the same source and target domains. Inspired by recent local descriptor based few-shot learning (FSL), our general UDA model is fully built upon local descriptors (LDs) for both image classification and domain adaptation. By proposing a novel concept called similarity patterns (SPs), our model not only effectively captures the spatial relationship of LDs that was ignored in previous FSL methods, but also makes the learned image similarity better serve the required domain alignment. Specifically, we propose a novel IMage-to-class sparse Similarity Encoding (IMSE) method. It learns SPs to extract local discriminative information for classification and meanwhile aligns the covariance matrices of the SPs for domain adaptation. In addition, domain adversarial training and multi-scale local feature matching are performed upon the LDs. Extensive experiments conducted on the multi-domain benchmark dataset DomainNet demonstrate the state-of-the-art performance of IMSE in the novel FS-UDA setting. Moreover, for FSL, IMSE also outperforms most recent FSL methods on miniImageNet.
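The abstract states that IMSE aligns the covariance matrices of the similarity patterns (SPs) across domains. Below is a minimal sketch of such a second-order alignment loss in the style of CORAL, written under the assumption that each image yields one d-dimensional SP vector; it is an illustration rather than the paper's exact formulation, and the names sp_src and sp_tgt are hypothetical.

```python
import torch

def covariance_alignment_loss(sp_src: torch.Tensor, sp_tgt: torch.Tensor) -> torch.Tensor:
    """CORAL-style alignment of second-order statistics between the
    similarity patterns of source and target images.

    sp_src, sp_tgt: [N, d] matrices, one d-dimensional SP per image
    (hypothetical layout; the paper's SP construction may differ).
    """
    def covariance(x: torch.Tensor) -> torch.Tensor:
        x = x - x.mean(dim=0, keepdim=True)   # center the patterns
        return x.t() @ x / (x.size(0) - 1)    # [d, d] sample covariance

    c_src, c_tgt = covariance(sp_src), covariance(sp_tgt)
    d = sp_src.size(1)
    # squared Frobenius distance between the two covariance matrices,
    # normalized as in Deep CORAL
    return ((c_src - c_tgt) ** 2).sum() / (4 * d * d)
```

Minimizing such a loss alongside the classification objective pushes the second-order statistics of the two domains' SPs together, which is one common way to realize the covariance alignment the abstract describes.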
Few-shot unsupervised domain adaptation (FS-UDA) utilizes few-shot labeled source domain data to realize effective classification in an unlabeled target domain. However, current FS-UDA methods still suffer from two issues: 1) the data from different domains cannot be effectively aligned by few-shot labeled data because of the large domain gaps, and 2) it is unstable and time-consuming to generalize to new FS-UDA tasks. To address these issues, we put forward a novel Efficient Meta Prompt Learning Framework for FS-UDA. Within this framework, we use the pre-trained CLIP model as the feature learning base model. First, we design domain-shared prompt learning vectors composed of virtual tokens, which mainly learn meta knowledge from a large number of meta tasks to mitigate the domain gaps. Second, we design a task-shared prompt learning network to adaptively learn specific prompt vectors for each task, aiming at fast adaptation and task generalization. Third, we learn a task-specific cross-domain alignment projection and a task-specific classifier with closed-form solutions for each meta task, which can efficiently adapt the model to new tasks in one step. The whole learning process is formulated as a bilevel optimization problem, and a good initialization of model parameters is learned through meta-learning. Extensive experiments demonstrate the promising performance of our framework on benchmark datasets: it improves over the state-of-the-art methods by at least 15.4% on 5-way 1-shot and 8.7% on 5-way 5-shot tasks, and its performance across all test tasks is more stable than that of the other methods.
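Of the components above, the task-specific classifier with a closed-form solution can be illustrated with ridge regression, a standard choice for one-step task adaptation in meta-learning. The sketch below is an assumption: the paper's exact solver may differ, the names feats, labels, and lam are hypothetical, and CLIP features are assumed to be precomputed for the support set.

```python
import torch

def closed_form_classifier(feats: torch.Tensor, labels: torch.Tensor,
                           n_way: int, lam: float = 1.0) -> torch.Tensor:
    """One-step task adaptation via ridge regression (illustrative).

    feats:  [N, d] support-set features from the (frozen) CLIP encoder.
    labels: [N] integer class labels in [0, n_way).
    Returns W: [d, n_way] classifier weights with a closed-form solution.
    """
    y = torch.nn.functional.one_hot(labels, n_way).float()  # [N, n_way] targets
    d = feats.size(1)
    # W = (X^T X + lam * I)^{-1} X^T Y  -- the standard ridge solution
    gram = feats.t() @ feats + lam * torch.eye(d, device=feats.device)
    return torch.linalg.solve(gram, feats.t() @ y)
```

Query predictions are then simply `logits = query_feats @ W`. Because solving one linear system replaces iterative fine-tuning, adapting to a new task needs no gradient steps, which matches the one-step adaptation the abstract emphasizes.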