The performance of supervised learning algorithms such as k-nearest neighbor (k-NN) depends on labeled data. For some applications (the target domain), obtaining such labeled data is expensive and labor-intensive. In real-world scenarios, a related application (the source domain) with sufficient labeled data is often available. However, there is a distribution discrepancy between the source and target domain data, since the data of the two domains are collected under different conditions. Therefore, a source domain application with sufficient labeled data cannot be directly used to train the target domain classifier. Domain Adaptation (DA), or Transfer Learning (TL), provides a way to transfer knowledge from the source domain application to the target domain application. Existing DA methods may not perform well when the discrepancy between the source and target domain data is large and the data are not linearly separable. Therefore, in this paper, we propose a Kernelized Unified Framework for Domain Adaptation (KUFDA) that minimizes the discrepancy between the two domains on linear or non-linear data-sets and aligns them both geometrically and statistically. Substantial experiments verify that the proposed framework outperforms state-of-the-art Domain Adaptation methods and primitive (non-Domain-Adaptation) methods on the real-world Office-Caltech and PIE Face data-sets. Our proposed approach (KUFDA) achieved mean accuracies of 86.83% on all possible tasks of Office-Caltech with VGG-Net features and 74.42% on the PIE Face data-sets.
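The statistical alignment the abstract refers to is, in many domain adaptation methods, quantified with Maximum Mean Discrepancy (MMD), the distance between the feature means of the two domains. The sketch below uses a linear-kernel MMD on synthetic data as an illustration of measuring domain discrepancy; it is not KUFDA itself, and the function and variable names are hypothetical:

```python
import numpy as np

def mmd_linear(Xs, Xt):
    """Linear-kernel Maximum Mean Discrepancy between two sample sets.

    A common measure of distribution discrepancy in domain adaptation;
    illustrative sketch only, not the KUFDA objective.
    """
    delta = Xs.mean(axis=0) - Xt.mean(axis=0)  # gap between feature means
    return float(delta @ delta)                # squared norm of the gap

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(100, 5))  # hypothetical "source" features
Xt = rng.normal(1.0, 1.0, size=(100, 5))  # mean-shifted "target" features

# The shifted target is much farther from the source than a source subsample is.
print(mmd_linear(Xs, Xs[:50]))  # near 0
print(mmd_linear(Xs, Xt))       # roughly the squared shift, about 5
```

A framework that aligns the domains statistically drives a quantity like this toward zero while preserving discriminative structure.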
Transfer learning is an effective way of dealing with real-world problems where the training and test data are drawn from different distributions. Transfer learning methods use a labeled source domain to boost the task in a target domain that may be unsupervised or semi-supervised. However, previous transfer learning algorithms use the Euclidean or Mahalanobis distance to represent the relationships between instances and to try to capture the geometry of the manifold. In many real-world scenarios this is not enough, and these functions fail to capture the intrinsic geometry of the manifold in which the data lies. In this paper, we propose a transfer learning framework called Semi-Supervised Metric Transfer Learning with Relative Constraints (SSMTR), which uses distance metric learning with a set of relative distance constraints that better capture the similarities and dissimilarities between the source and target domains. In SSMTR, instance weights are learned for the different domains and used to reduce the domain shift, while a relative distance metric is learned in parallel. We have developed SSMTR for classification problems as well, and have conducted extensive experiments on several real-world datasets, in particular the PIE Face, Office-Caltech, and USPS-MNIST datasets, to verify the accuracy of our proposed algorithm compared to current transfer learning algorithms.
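A relative distance constraint of the kind the abstract describes is a triplet inequality: an anchor instance should be closer, under the learned metric, to a similar instance than to a dissimilar one. The sketch below evaluates one such constraint under a Mahalanobis metric; it is an illustration of the constraint form only, not the SSMTR algorithm, and all names are hypothetical:

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance d_M(x, y)^2 = (x - y)^T M (x - y)."""
    d = x - y
    return float(d @ M @ d)

def satisfies_relative_constraint(anchor, similar, dissimilar, M, margin=0.0):
    """Check one relative constraint: d_M(anchor, similar)^2 + margin
    must be strictly less than d_M(anchor, dissimilar)^2.

    Metric learning with relative constraints optimizes M so that a set
    of such triplet inequalities holds; here we only evaluate one.
    """
    return (mahalanobis_sq(anchor, similar, M) + margin
            < mahalanobis_sq(anchor, dissimilar, M))

M = np.eye(2)  # identity metric, i.e. plain squared Euclidean distance
anchor = np.array([0.0, 0.0])
similar = np.array([0.1, 0.0])      # nearby point, same notional class
dissimilar = np.array([1.0, 1.0])   # far point, different notional class

print(satisfies_relative_constraint(anchor, similar, dissimilar, M))  # True
```

Learning the metric amounts to adjusting the positive semi-definite matrix M so that as many of the given triplet constraints as possible are satisfied, which is how such methods capture geometry that a fixed Euclidean distance misses.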