2018
DOI: 10.1109/access.2018.2843773
Feature-Based Transfer Learning Based on Distribution Similarity

Cited by 26 publications (25 citation statements)
References 12 publications
“…This helps in obtaining optimized latent subspaces S and T, as they are similar to each other. In addition, we can create a model using the optimized latent subspace S and make predictions with the optimized latent subspace T to obtain labeled data for the target domain T [46].…”
Section: Optimizing Latent Subspace for Both Source and Target Domain Data (mentioning)
confidence: 99%
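To make the train-on-S, predict-on-T idea in the statement above concrete, here is a minimal Python sketch. It is not the cited paper's algorithm: PCA over the stacked domains stands in for the distribution-similarity-based subspace optimization, and Xs, ys, and Xt are synthetic placeholder arrays.

```python
# Minimal sketch of training on latent subspace S and predicting on T.
# PCA over the stacked domains is a stand-in for the paper's
# similarity-optimized subspaces; all data here is synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(200, 20))           # source-domain features
ys = (Xs[:, 0] + Xs[:, 1] > 0).astype(int)          # source-domain labels
Xt = rng.normal(0.3, 1.2, size=(150, 20))           # shifted target features

k = 5                                               # latent subspace size
pca = PCA(n_components=k).fit(np.vstack([Xs, Xt]))  # shared projection
S, T = pca.transform(Xs), pca.transform(Xt)         # latent subspaces S and T

clf = LogisticRegression().fit(S, ys)               # model built on S
target_labels = clf.predict(T)                      # predictions on T yield
                                                    # labels for the target domain
```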
“…For inter-domain optimization, two hyperparameters must be set: the similarity parameter β [46] and the size k of the new feature space. We therefore determined the sensitivity of these parameters through an experiment. First, we analyzed the sensitivity to the similarity parameter β.…”
Section: Parameter Sensitivity (mentioning)
confidence: 99%
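The sensitivity experiment described above amounts to sweeping (β, k) over a grid and recording a score for each pair. A hedged sketch follows: evaluate() is a hypothetical stand-in that maps β onto a classifier's regularization strength, since the cited work's actual objective, where β weights inter-domain similarity, is not reproduced here.

```python
# Sketch of a (beta, k) sensitivity sweep. The evaluate() stand-in maps
# beta onto regularization strength; in the cited method beta instead
# weights inter-domain similarity in the subspace optimization.
import itertools
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 20))                 # synthetic features
y = (X[:, 0] - X[:, 2] > 0).astype(int)        # synthetic labels

def evaluate(beta: float, k: int) -> float:
    Z = PCA(n_components=k).fit_transform(X)   # size-k feature space
    clf = LogisticRegression(C=1.0 / beta)     # beta as a stand-in knob
    return cross_val_score(clf, Z, y, cv=3).mean()

for beta, k in itertools.product([0.01, 0.1, 1.0, 10.0], [2, 5, 10]):
    print(f"beta={beta:<5} k={k:<3} score={evaluate(beta, k):.3f}")
```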
“…TL, as an advanced variant of ML, has attained great success in various fields over the last two decades, e.g., speech recognition [8,9], text mining [10], computer vision [11,12], and ubiquitous computing [13,14]. The existing TL approaches are categorized into the following three main groups: (1) instance-based [15], (2) model-based [16,17], and (3) feature-based [18,19] approaches.…”
Section: Related Work (mentioning)
confidence: 99%
“…The auxiliary source domain often has a distribution different from that of the target domain, which makes traditional machine learning algorithms ineffective, since they assume that the source and target data follow the same distribution. In such a circumstance, transfer learning methods are used to reduce the distribution divergence; they can be divided into the following three categories: (a) instance-based methods [47], which reuse samples from the source domain via re-weighting techniques, (b) feature-based methods [46, 54], which learn a subspace of shared features to represent the source and target data under common conditions, or perform distribution alignment to minimize the marginal or conditional distribution divergence between domains, and (c) model-based methods [31], which transfer the parameters of the source model to improve the performance of the target model.…”
Section: Introduction (mentioning)
confidence: 99%
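The divergence that these methods minimize has to be measured somehow; one common choice for marginal distribution divergence is the maximum mean discrepancy (MMD). Below is a generic linear-kernel MMD sketch on synthetic data, not the specific criterion of the cited paper.

```python
# Generic squared MMD with a linear kernel: ||mean(Xs) - mean(Xt)||^2.
# A common measure of marginal distribution divergence between domains;
# not the cited paper's exact criterion. All data here is synthetic.
import numpy as np

def linear_mmd(Xs: np.ndarray, Xt: np.ndarray) -> float:
    delta = Xs.mean(axis=0) - Xt.mean(axis=0)
    return float(delta @ delta)

rng = np.random.default_rng(2)
Xs = rng.normal(0.0, 1.0, size=(500, 10))   # source samples
Xt = rng.normal(0.5, 1.0, size=(500, 10))   # mean-shifted target samples
print(linear_mmd(Xs, Xt))                   # larger shift -> larger divergence
```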