2015 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2015.7280412

Shrinkage learning to improve SVM with hints

Abstract: The Support Vector Machine (SVM) is one of the most effective and widely used algorithms for classification. Despite its success, SVM is mainly afflicted by two issues: (i) some hyperparameters must be tuned in advance and are, in practice, identified through computationally intensive procedures; (ii) possible a-priori knowledge about the problem (e.g., doctor expertise in medical applications) cannot be straightforwardly exploited. In this paper, we introduce a new approach, able to cope with the tw…
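The abstract is truncated above, but the general idea of folding a-priori knowledge ("hints") into an SVM can be sketched with biased regularization: shrink the weight vector toward a hint hypothesis w0 instead of toward zero. This is a minimal, hypothetical sketch — the function name, subgradient solver, and loss weighting are assumptions for illustration, not the paper's actual shrinkage-learning procedure:

```python
import numpy as np

def hinted_linear_svm(X, y, w0, C=1.0, lr=0.05, epochs=500):
    """Linear SVM with biased regularization toward a hint hypothesis w0:

        min_w  0.5 * ||w - w0||^2  +  C * mean_i hinge(y_i * <w, x_i>)

    Solved by plain subgradient descent. Hypothetical sketch, not the
    paper's exact algorithm.
    """
    w = w0.astype(float).copy()
    n = len(y)
    for _ in range(epochs):
        margins = y * (X @ w)
        active = margins < 1  # points violating the margin
        # Subgradient: (w - w0) from the biased regularizer,
        # minus (C/n) * sum of y_i * x_i over margin violators.
        grad = (w - w0) - (C / n) * (X[active] * y[active, None]).sum(axis=0)
        w -= lr * grad
    return w
```

With w0 = 0 this reduces to an ordinary linear SVM; a good hint pulls the solution toward the expert's hypothesis, which matters most when labeled data are scarce.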

Cited by 2 publications (7 citation statements)
References 29 publications
“…Transfer is commonly performed from a set of previous models or hypotheses, rather than from a single source. A number of theoretical (Kuzborskij & Orabona, 2013; Kuzborskij, 2018), experimental (Yang et al., 2007; Aytar & Zisserman, 2011; Tommasi et al., 2014; Kuzborskij et al., 2015; Oneto et al., 2015; Mozafari & Jamzad, 2016; Wang & Hebert, 2016), and application-specific methods (Valerio, Passarella, & Conti, 2016) have been proposed. Transferring from previous hypotheses rather than from previous data, as in classic transfer learning, provides a potential for long-term learning systems, since these hypotheses are the only knowledge that needs to be stored to aid learning of future tasks.…”
Section: Previous Research On Supervised Lifelong Learning
confidence: 99%
“…Hypothesis transfer learning is an alternative for transferring knowledge from previous models or hypotheses learned during previous tasks. Most research in hypothesis transfer learning with SVM has proposed to transfer knowledge from source hypotheses as a whole, without distinguishing fragments of knowledge that can be potentially more useful for a target task (Yang et al., 2007; Aytar & Zisserman, 2011; Tommasi et al., 2014; Kuzborskij et al., 2015; Oneto et al., 2015; Mozafari & Jamzad, 2016; Wang & Hebert, 2016). Furthermore, most existing solutions require the target hypothesis to be represented in terms of both the learned set of parameters and the existing source hypotheses.…”
Section: Previous Research On Transferring Forward From Previous Hypo...
confidence: 99%
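Transferring from a set of source hypotheses "as a whole", as the statement above describes, can be illustrated by weighting each source hypothesis by how well it fits the small target sample and taking the convex combination as the bias point for a target SVM. This is a hypothetical illustration — the softmax weighting and temperature are assumptions, not any cited method:

```python
import numpy as np

def combine_source_hypotheses(X, y, sources, temp=1.0):
    """Weight linear source hypotheses by a softmax over their negative
    mean hinge loss on the target sample (X, y), and return the convex
    combination to use as the shrinkage target of a biased SVM.
    Hypothetical sketch, not a cited algorithm.
    """
    losses = np.array([np.maximum(0.0, 1.0 - y * (X @ w)).mean()
                       for w in sources])
    beta = np.exp(-losses / temp)
    beta /= beta.sum()  # convex weights over the source hypotheses
    w0 = sum(b * w for b, w in zip(beta, sources))
    return w0, beta
```

A source that already classifies the target sample well receives most of the weight, so the combined hint w0 stays close to the relevant prior knowledge while poor sources are effectively discounted.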