2009 Ninth IEEE International Conference on Data Mining
DOI: 10.1109/icdm.2009.75

Extending Semi-supervised Learning Methods for Inductive Transfer Learning

Abstract: Inductive transfer learning and semi-supervised learning are two different branches of machine learning. The former tries to reuse knowledge in labeled out-of-domain instances, while the latter attempts to exploit the usefulness of unlabeled in-domain instances. In this paper, we bridge the two branches by pointing out that many semi-supervised learning methods can be extended for inductive transfer learning, if the step of labeling an unlabeled instance is replaced by re-weighting a diff-distribution i…
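The abstract's central idea, replacing the "label an unlabeled instance" step of a self-training-style loop with "re-weight a labeled out-of-domain instance", can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the toy 1-D centroid classifier, the confidence formula, and all function names are assumptions made here for the sake of a self-contained example.

```python
# Sketch: a self-training-style loop where, instead of labeling unlabeled
# in-domain data, we re-weight labeled out-of-domain (source) instances by
# how well the current model agrees with their given labels. The centroid
# classifier and confidence measure are illustrative assumptions.

def centroid_fit(X, y, w=None):
    """Weighted class centroids for a toy 1-D classification problem."""
    w = w or [1.0] * len(X)
    cent = {}
    for label in set(y):
        num = sum(wi * xi for xi, yi, wi in zip(X, y, w) if yi == label)
        den = sum(wi for yi, wi in zip(y, w) if yi == label)
        cent[label] = num / den
    return cent

def predict_conf(cent, x):
    """Predict the nearest-centroid label and a confidence in (0.5, 1]."""
    d = {label: abs(x - c) for label, c in cent.items()}
    label = min(d, key=d.get)
    other = max(d, key=d.get)
    conf = d[other] / (d[label] + d[other] + 1e-12)
    return label, conf

def transfer_reweight(Xt, yt, Xs, ys, rounds=3):
    """Iteratively re-weight diff-distribution source instances instead of
    labeling unlabeled ones, as in the extension the abstract describes."""
    w_src = [1.0] * len(Xs)
    for _ in range(rounds):
        # train on target data plus currently weighted source data
        cent = centroid_fit(Xt + Xs, yt + ys, [1.0] * len(Xt) + w_src)
        # up-weight source instances the model agrees with, down-weight others
        for i, (x, y) in enumerate(zip(Xs, ys)):
            pred, conf = predict_conf(cent, x)
            w_src[i] = conf if pred == y else 1.0 - conf
    return w_src, cent
```

In this sketch a source instance whose label contradicts the target-trained model ends up with a weight near zero, so it contributes little to later training rounds, which is the intended effect of re-weighting diff-distribution instances.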

Cited by 31 publications (47 citation statements). References 9 publications.
“…The splice data set has 3190 examples, each with 60 attributes and one binary class label. We adopt the same strategy as in [20,25] to divide the data sets. The mushroom data set is split into the source task and the target task based on the attribute stalk-shape: the source task contains examples whose stalk-shape is tapering, while the examples in the target task have a stalk-shape of enlarging.…”
Section: Experiments, 5.1 Experimental Settings
confidence: 99%
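The split strategy quoted above, partitioning a single-task data set on one attribute so the two halves follow different distributions, can be illustrated briefly. The dictionary record format and the `split_by_attribute` helper are assumptions made here; only the attribute name "stalk-shape" and its values come from the quoted text.

```python
# Illustrative sketch of the source/target split described above: partition
# a data set on one attribute so the resulting tasks have different
# distributions. The record format and helper name are assumptions.

def split_by_attribute(rows, attr, source_value):
    """Rows whose `attr` equals `source_value` form the source task;
    all remaining rows form the target task."""
    source = [r for r in rows if r[attr] == source_value]
    target = [r for r in rows if r[attr] != source_value]
    return source, target

rows = [
    {"stalk_shape": "tapering", "label": "edible"},
    {"stalk_shape": "enlarging", "label": "poisonous"},
    {"stalk_shape": "tapering", "label": "poisonous"},
]
source_task, target_task = split_by_attribute(rows, "stalk_shape", "tapering")
# tapering examples become the source task; enlarging ones the target task
```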
“…The four real data sets used in the experiments are monk, mushroom, splice and krvskp (http://archive.ics.uci.edu/ml/). Although these data sets are for single-task learning, we adopted a preprocessing method [20,25] to fit them to the transfer learning scenario, except for the monk data sets. We exclude more complex text datasets because our rule-based classifier is not appropriate for them.…”
Section: Experiments, 5.1 Experimental Settings
confidence: 99%
“…First, we measure the distance between instances using Transfer Learning [6]. The goal of Transfer Learning is to apply knowledge learned in one environment to solve problems in a new environment.…”
Section: B. The Weight of Training Data
confidence: 99%
“…Therefore, this paper first assigns initial weights to the cross-project training instances through Transfer Learning [6]. The final weights are then obtained by combining these with the number-of-faults information, and the prior probabilities of the presence or absence of defects are calculated.…”
Section: Introduction
confidence: 99%
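The last quoted statement describes computing class priors for defect presence or absence from weighted cross-project instances. A minimal sketch of such a weighted prior, assuming each instance contributes its transfer-learning weight rather than a unit count (the function name, labels, and weights here are illustrative assumptions):

```python
# Sketch: class priors where each training instance contributes its weight
# rather than a unit count, as when cross-project instances are weighted
# by transfer learning. Names and example values are assumptions.

def weighted_priors(labels, weights):
    """Return {class: prior} with each instance counted by its weight."""
    total = sum(weights)
    counts = {}
    for y, w in zip(labels, weights):
        counts[y] = counts.get(y, 0.0) + w
    return {y: c / total for y, c in counts.items()}

# e.g. cross-project instances weighted by similarity to the target project
labels = ["defective", "clean", "clean", "defective"]
weights = [0.9, 0.2, 0.3, 0.6]
priors = weighted_priors(labels, weights)
# priors["defective"] = (0.9 + 0.6) / 2.0 = 0.75
```

Down-weighted instances thus shift the estimated priors toward the instances judged most relevant to the target project.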