In recent years, classification models based on Fisher linear discriminant analysis (FLDA) have been among the most successful approaches, performing effectively on a wide range of classification tasks. However, when the training data (source domain) have a different distribution from the test data (target domain), FLDA-based models may not work well and their performance degrades dramatically. To address this issue, we propose a domain adaptation via Bregman divergence minimization (DAB) approach, in which discriminative features of the source and target domains are learned simultaneously through a domain-invariant representation. DAB is built on the constraints of FLDA and adapts the coupled marginal and conditional distribution shifts through Bregman divergence minimization. The resulting representation therefore retains the discriminative behavior of FLDA while also separating the classes across domains. Moreover, the proposed approach can easily be kernelized to handle nonlinear tasks. Experiments on several benchmark datasets demonstrate that DAB effectively handles cross-domain divergence and outperforms recent state-of-the-art domain adaptation approaches on cross-distribution domains.
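To make the central quantity concrete, the following minimal sketch (not the authors' DAB implementation; the toy means are hypothetical) shows the generic Bregman divergence D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>, and how, with phi(x) = ||x||^2, it reduces to the squared Euclidean distance between source and target feature means — one simple proxy for marginal distribution shift:

```python
import numpy as np

def bregman_divergence(phi, grad_phi, x, y):
    """Generic Bregman divergence D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>."""
    return phi(x) - phi(y) - np.dot(grad_phi(y), x - y)

# With phi(x) = ||x||^2, the Bregman divergence specializes to ||x - y||^2.
phi = lambda x: np.dot(x, x)
grad_phi = lambda x: 2.0 * x

# Hypothetical source/target feature means; the divergence between domain
# means serves here as a toy measure of marginal distribution shift.
source_mean = np.array([1.0, 2.0])
target_mean = np.array([2.0, 0.0])
shift = bregman_divergence(phi, grad_phi, source_mean, target_mean)
# shift equals ||source_mean - target_mean||^2 = 1 + 4 = 5
```

Other choices of phi (e.g., negative entropy, which yields the KL divergence) give different adaptation criteria within the same framework.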
Domain adaptation in machine learning and image processing aims to exploit knowledge gained from labeled training sets (the source domain) to classify an unseen test set (the target domain). The major difficulty arises from dataset bias, where the source and target domains have different distributions. In this paper, we introduce a novel unsupervised domain adaptation method for cross-domain visual classification. We propose a unified framework, referred to as Transferred Local Fisher Discriminant Analysis (TLFDA), that reduces both the statistical and the geometrical shift across domains. Specifically, TLFDA projects the data into a shared subspace that minimizes the distribution shift between domains while preserving the discrimination across classes: it maximizes between-class separability and preserves the within-class local structure within a single objective function, which is solved efficiently in closed form. Extensive experiments demonstrate that TLFDA significantly outperforms many state-of-the-art domain adaptation methods on various cross-domain visual classification tasks.
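The Fisher criterion underlying both abstracts can be sketched in a few lines. The code below (an illustrative sketch of classical FLDA on toy data, not the TLFDA method itself; the regularization term is an added assumption for numerical stability) builds the between-class and within-class scatter matrices and solves the generalized eigenproblem S_b w = lambda S_w w for the leading projection direction:

```python
import numpy as np

def fisher_projection(X, y, dim=1):
    """Classical FLDA: solve S_b w = lambda S_w w and return the top `dim` directions."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    d = X.shape[1]
    S_w = np.zeros((d, d))  # within-class scatter
    S_b = np.zeros((d, d))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        S_w += (Xc - mc).T @ (Xc - mc)
        diff = (mc - overall_mean).reshape(-1, 1)
        S_b += len(Xc) * (diff @ diff.T)
    # Small ridge on S_w for invertibility (assumption, not from the papers).
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(S_w + 1e-6 * np.eye(d), S_b))
    order = np.argsort(-eigvals.real)
    return eigvecs.real[:, order[:dim]]

# Toy example: two tight clusters separated along the first axis, so the
# leading Fisher direction should be dominated by that axis.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0.0, 0.0], 0.1, size=(20, 2)),
               rng.normal([3.0, 0.0], 0.1, size=(20, 2))])
y = np.array([0] * 20 + [1] * 20)
w = fisher_projection(X, y)
```

Methods such as TLFDA augment this objective with domain-alignment terms so that the learned subspace also minimizes the shift between source and target distributions.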