2012
DOI: 10.1109/tkde.2012.75
Knowledge Transfer with Low-Quality Data: A Feature Extraction Issue

Cited by 54 publications (28 citation statements)
References 23 publications
“…The first category would modify the parameters of a source learning model to improve its accuracy in a target domain [30,31]. The second one would reduce the difference between the source and target distributions to adapt the classifier to the target domain [32,33]. The last one would automatically select the training samples that could give a better model for the target task [34,35].…”
Section: Related Work
confidence: 99%
“…Furthermore, some solutions concatenated the source dataset with new samples, which increased the dataset size across iterations [30][31][32][33]. Others were limited to samples extracted from the target domain [28], which resulted in the loss of pertinent information from the source samples.…”
Section: Related Work
confidence: 99%
“…To achieve this goal, Pan et al. [7] proposed a Transfer Component Analysis (TCA) method to minimize the reconstruction error of the input data by reducing the discrepancy between the labeled and unlabeled data. Quanz et al. [19] explored knowledge transfer with sparse features, which is a more restricted procedure and prone to overfitting. In addition, Wang et al. [20] extended NMF to the cross-domain scenario.…”
Section: Related Work
confidence: 99%
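The TCA method mentioned in the excerpt above learns a kernel embedding in which the maximum mean discrepancy (MMD) between source and target samples is reduced. The following is a minimal sketch under simplifying assumptions (an RBF kernel, dense eigendecomposition, and hypothetical parameter names dim, mu, gamma); it is not the reference implementation of Pan et al.

```python
import numpy as np

def tca(Xs, Xt, dim=10, mu=1.0, gamma=1.0):
    """Minimal Transfer Component Analysis sketch (RBF kernel).

    Finds a low-dimensional embedding in which the MMD between
    source (Xs) and target (Xt) samples is reduced.
    """
    X = np.vstack([Xs, Xt])
    ns, nt = len(Xs), len(Xt)
    n = ns + nt

    # RBF kernel over the pooled source and target samples
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

    # MMD coefficient matrix L and centering matrix H
    e = np.vstack([np.full((ns, 1), 1.0 / ns), np.full((nt, 1), -1.0 / nt)])
    L = e @ e.T
    H = np.eye(n) - np.ones((n, n)) / n

    # Transfer components: leading eigenvectors of (K L K + mu I)^-1 K H K
    M = np.linalg.solve(K @ L @ K + mu * np.eye(n), K @ H @ K)
    vals, vecs = np.linalg.eig(M)
    W = np.real(vecs[:, np.argsort(-np.real(vals))[:dim]])

    Z = K @ W                      # embedded pooled data
    return Z[:ns], Z[ns:]          # new source / target features
```

A classifier trained on the returned source features can then be applied to the target features, which is the adaptation scenario the cited works describe.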
“…Quanz et al., 2012 [16] explored a feature extraction perspective, starting with the popular sparse coding approach, which learns a set of higher-order features for the data. They presented a novel method in which no classifier is used.…”
Section: Review on Feature Selection and Reduction
confidence: 99%
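The sparse-coding perspective referenced in this excerpt can be illustrated with a small sketch: a shared dictionary is learned on pooled source and target inputs, both domains are encoded against it, and a model trained on the source codes is applied to the target codes. This is only an assumption-laden stand-in using scikit-learn's DictionaryLearning and toy data, not the procedure of Quanz et al.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
Xs, ys = rng.randn(100, 20), rng.randint(0, 2, 100)   # toy labeled source data
Xt = rng.randn(80, 20)                                 # toy unlabeled target data

# Learn one dictionary from both domains so the codes share a basis
dico = DictionaryLearning(n_components=30, alpha=1.0, max_iter=200,
                          transform_algorithm='lasso_lars', random_state=0)
dico.fit(np.vstack([Xs, Xt]))

Zs = dico.transform(Xs)            # sparse codes for source samples
Zt = dico.transform(Xt)            # sparse codes for target samples

# Train on source codes, predict on target codes
clf = LogisticRegression(max_iter=1000).fit(Zs, ys)
yt_pred = clf.predict(Zt)
```

The final classification step is included only to make the sketch end-to-end; as the excerpt notes, the cited method itself does not rely on a separate classifier.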