Twenty-First International Conference on Machine Learning (ICML '04), 2004
DOI: 10.1145/1015330.1015436
Improving SVM accuracy by training on auxiliary data sources

Abstract: The standard model of supervised learning assumes that training and test data are drawn from the same underlying distribution. This paper explores an application in which a second, auxiliary, source of data is available drawn from a different distribution. This auxiliary data is more plentiful, but of significantly lower quality, than the training and test data. In the SVM framework, a training example has two roles: (a) as a data point to constrain the learning process and (b) as a candidate support vector th…
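The abstract's core idea — letting plentiful but lower-quality auxiliary data constrain the SVM less than the primary training data — can be illustrated with a minimal sketch. This is not the paper's exact method; it uses scikit-learn's per-sample weights as a stand-in for the more general treatment in the paper, and all data here is synthetic:

```python
# Sketch: down-weighting auxiliary-source examples when training an SVM.
# Assumption: per-sample weights approximate the idea of letting noisy
# auxiliary data influence the margin without dominating it.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Primary (high-quality) data: two well-separated Gaussian blobs.
X_primary = np.vstack([rng.normal(-2.0, 0.5, (20, 2)),
                       rng.normal(+2.0, 0.5, (20, 2))])
y_primary = np.array([0] * 20 + [1] * 20)

# Auxiliary (plentiful but noisy) data drawn from a shifted, wider distribution.
X_aux = np.vstack([rng.normal(-1.0, 1.5, (100, 2)),
                   rng.normal(+1.0, 1.5, (100, 2))])
y_aux = np.array([0] * 100 + [1] * 100)

X = np.vstack([X_primary, X_aux])
y = np.concatenate([y_primary, y_aux])

# Each auxiliary example gets a fraction of a primary example's weight.
weights = np.concatenate([np.ones(len(y_primary)),
                          0.2 * np.ones(len(y_aux))])

clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y, sample_weight=weights)
print(clf.score(X_primary, y_primary))
```

Note that under this weighting the auxiliary points still serve both roles the abstract names: they constrain the learned boundary, and any of them can end up as a support vector.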

Cited by 229 publications (136 citation statements; citing years 2005–2022). References 8 publications.
“…This asymmetric case, or transfer learning, requires the assumption of an asymmetric dependency structure between tasks. Existing approaches include reweighting-based methods [16,3,4] or learning of shared feature spaces. An alternative has been to, in effect, use a symmetric multi-task learning method in an asymmetric mode, by using the model learned from auxiliary tasks as a prior for the target task [9,12,17].…”
Section: Symmetric and Asymmetric Multi-task Learning
confidence: 99%
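The "auxiliary model as a prior" idea in this excerpt can be sketched with ridge regression: fit the auxiliary task first, then regularize the target-task weights toward the auxiliary weights instead of toward zero. This is a hypothetical minimal setup on synthetic data, not any specific cited method:

```python
# Sketch: using a model learned on an auxiliary task as a prior for the
# target task. Assumption: ridge regression stands in for the generic
# "symmetric method used in asymmetric mode" described in the text.
import numpy as np

rng = np.random.default_rng(1)
w_true = np.array([1.0, -2.0, 0.5])

# Auxiliary task: plentiful data, slightly different true weights.
X_aux = rng.normal(size=(200, 3))
y_aux = X_aux @ (w_true + 0.1) + rng.normal(scale=0.1, size=200)

# Target task: very few examples.
X_tgt = rng.normal(size=(5, 3))
y_tgt = X_tgt @ w_true + rng.normal(scale=0.1, size=5)

lam = 1.0
I = np.eye(3)

# Step 1: ordinary ridge on the auxiliary task (prior pulled toward zero).
w_src = np.linalg.solve(X_aux.T @ X_aux + lam * I, X_aux.T @ y_aux)

# Step 2: ridge on the target task, pulled toward w_src instead of zero:
#   argmin_w ||X w - y||^2 + lam * ||w - w_src||^2
w_transfer = np.linalg.solve(X_tgt.T @ X_tgt + lam * I,
                             X_tgt.T @ y_tgt + lam * w_src)
print(np.round(w_transfer, 2))
```

With only five target examples, the auxiliary prior keeps the solution near a sensible weight vector where an unregularized fit could be badly underdetermined.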
“…Existing approaches include reweighting-based methods (Wu and Dietterich 2004; Bickel et al. 2008, 2009) or learning of shared feature spaces. An alternative has been to, in effect, use a symmetric multi-task learning method in an asymmetric mode, by using the model learned from auxiliary tasks as a prior for the target task (Raina et al. 2005; Xue et al. 2007).…”
Section: Symmetric and Asymmetric Multi-task Learning
confidence: 99%
“…In image analysis, text analysis, and robotics, many methods have been devised for knowledge transfer. Related machine learning subjects include: learning from hints [104], lifelong learning [105], multi-task learning [106], cross-domain learning [107,108], cross-category learning [109] and self-taught learning [110]. The EigenTransfer algorithm [111] tries to unify various transfer learning ideas by representing the target task as a graph.…”
Section: Transfer of Knowledge
confidence: 99%