2016
DOI: 10.1186/s40537-016-0043-6
A survey of transfer learning

Cited by 4,565 publications (2,355 citation statements)
References 94 publications
“…Several machine learning problems fall into this category; probably the most popular are concept drift [Gama et al 2014], domain adaptation [Daume III and Marcu 2006] and transfer learning [Pan and Yang 2010; Weiss et al 2016]. All these problems share the property that the distribution used to train the model differs from that of the data the model will face later.…”
Section: Changes in Data Distribution (mentioning)
confidence: 99%
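The shared premise of this excerpt, a model trained on one distribution being deployed on another, can be made concrete with a minimal Python sketch. The synthetic Gaussian data, the shift amount, and the scikit-learn classifier are illustrative assumptions, not material from the cited papers.

```python
# Minimal sketch (not from the cited papers): a classifier trained on a
# source distribution degrades when the deployment distribution shifts.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, shift=0.0):
    """Two Gaussian classes; `shift` moves both class means (covariate shift)."""
    X0 = rng.normal(loc=-1.0 + shift, scale=1.0, size=(n, 2))
    X1 = rng.normal(loc=+1.0 + shift, scale=1.0, size=(n, 2))
    X = np.vstack([X0, X1])
    y = np.array([0] * n + [1] * n)
    return X, y

X_src, y_src = sample(500)             # training (source) distribution
X_tgt, y_tgt = sample(500, shift=1.5)  # deployment (target) distribution

clf = LogisticRegression().fit(X_src, y_src)
print("source accuracy:", clf.score(X_src, y_src))
print("target accuracy:", clf.score(X_tgt, y_tgt))  # noticeably lower
```

Under this shift the target accuracy falls well below the source accuracy; this train/deployment mismatch is exactly what concept drift, domain adaptation, and transfer learning methods address.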
“…OSELM requires a number of instances in Θ_1 not less than the number of hidden neurons to form an initialization set before online training, which greatly affects OSELM's learning capability. Since the training instances in Θ_1 are not enough to form a proper initialization set, we turn to Ω to exploit extra information.…”
Section: The Proposed OTELM (mentioning)
confidence: 99%
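The constraint described in this excerpt comes from the ELM initialization solving a least-squares problem. The sketch below, assuming a sigmoid hidden layer and illustrative names (`elm_hidden`, `init_beta`) that are not from the OSELM papers, shows why the initialization set needs at least as many instances as hidden neurons.

```python
# Hedged sketch of the (OS-)ELM initialization step the excerpt refers to.
import numpy as np

rng = np.random.default_rng(0)

def elm_hidden(X, W, b):
    """Random-feature hidden layer: H = sigmoid(X W + b)."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def init_beta(X0, Y0, W, b):
    """Initial output weights: beta = (H^T H)^{-1} H^T Y.
    H^T H is invertible only if H has full column rank, which requires
    at least as many initialization instances as hidden neurons --
    the constraint on Θ_1 mentioned in the excerpt."""
    H = elm_hidden(X0, W, b)
    return np.linalg.solve(H.T @ H, H.T @ Y0)

n_hidden, n_features = 20, 5
W = rng.normal(size=(n_features, n_hidden))
b = rng.normal(size=n_hidden)

X0 = rng.normal(size=(25, n_features))  # 25 >= 20 instances: solvable
Y0 = rng.normal(size=(25, 1))
beta = init_beta(X0, Y0, W, b)          # with fewer than 20, H^T H is singular
```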
“…If we can pick out these instances from the initialization set, the performance would be improved. Based on this idea, we propose OTELM, which consists of three steps. First, guided by the N_1 labeled instances in Θ_1, an SSELM is trained on the instances from Θ_1 ∪ Ω, where the instances in Ω are treated as unlabeled and are re-labeled after the model is trained. Second, pick out the instances in Ω whose labels as predicted by the SSELM are not equal to their real labels.…”
Section: The Proposed OTELM (mentioning)
confidence: 99%
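A hedged sketch of the filtering idea in this excerpt: since SSELM has no standard library implementation, scikit-learn's LabelPropagation stands in for the semi-supervised model, and all names are illustrative rather than taken from the OTELM paper.

```python
# Sketch of the excerpt's three-step idea, with LabelPropagation standing
# in for the SSELM. Θ_1 = (X1, y1) is labeled; Ω = (X_omega, y_omega).
import numpy as np
from sklearn.semi_supervised import LabelPropagation

def filter_initialization_set(X1, y1, X_omega, y_omega):
    """Step 1: train a semi-supervised model on Θ_1 ∪ Ω, with Ω unlabeled.
    Step 2: predict labels for the instances in Ω.
    Step 3: drop the Ω instances whose predicted label disagrees with
    the real one, keeping only the consistent instances."""
    X = np.vstack([X1, X_omega])
    y = np.concatenate([y1, -np.ones(len(X_omega), dtype=int)])  # -1 = unlabeled
    ssl = LabelPropagation().fit(X, y)
    pred = ssl.predict(X_omega)
    keep = pred == y_omega  # discard instances the SSL model disagrees on
    return X_omega[keep], y_omega[keep]
```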
“…Transfer learning has been employed in a number of domains containing multiple sources to allow inference on data from unseen sources. For a more in-depth discussion of the wider field, [6] provides a recent, thorough survey. More specifically, the BCI literature typically reports domain adaptation approaches [5], the most popular of which is Common Spatial Patterns [7].…”
Section: Related Work on Transfer Learning in BCI (mentioning)
confidence: 99%
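Common Spatial Patterns, named in this excerpt, is a standard two-class spatial-filtering method for EEG. The sketch below is the textbook formulation as a generalized eigenvalue problem, not the implementation used in [7]; the function name and shapes are assumptions.

```python
# Hedged sketch of two-class Common Spatial Patterns (CSP).
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=4):
    """trials_*: arrays of shape (n_trials, n_channels, n_samples).
    Returns spatial filters (rows) that maximize variance for class A
    relative to class B, and vice versa."""
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)

    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Solve Ca w = lambda (Ca + Cb) w; the extreme eigenvalues give the
    # most discriminative filters for each class.
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    picks = np.concatenate([order[: n_filters // 2], order[-n_filters // 2:]])
    return vecs[:, picks].T
```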