The 2010 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2010.5596450
Model selection for support vector machines: Advantages and disadvantages of the Machine Learning Theory

Abstract: A common belief is that Machine Learning Theory (MLT) is not very useful, in practice, for performing effective SVM model selection. This belief is supported by experience, because well-known hold-out methods such as cross-validation, leave-one-out, and the bootstrap usually achieve better results than those derived from MLT. We show in this paper that, in a small-sample setting, i.e. when the dimensionality of the data is larger than the number of samples, a careful application of the MLT can outperform…
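The hold-out methods the abstract refers to all follow the same recipe: split the data, fit each candidate model on one part, score it on the held-out part, and keep the candidate with the lowest validation error. A minimal, hypothetical sketch of k-fold cross-validation for hyperparameter selection is below; a toy threshold classifier stands in for a full SVM, and all names and data are illustrative, not taken from the paper.

```python
# Sketch of hold-out model selection via k-fold cross-validation.
# A toy 1-D threshold classifier replaces the SVM; the candidate
# thresholds play the role of SVM hyperparameters (e.g. C).

def k_fold_splits(n, k):
    """Yield (train_idx, val_idx) pairs for k-fold cross-validation."""
    folds = [list(range(n))[i::k] for i in range(k)]
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val

def cv_error(X, y, threshold, k=5):
    """Average validation error of the rule 'predict True iff x > threshold'."""
    errs = []
    for train, val in k_fold_splits(len(X), k):
        # The toy model needs no fitting; a real SVM would be trained
        # on 'train' here with the candidate hyperparameter value.
        wrong = sum((X[j] > threshold) != y[j] for j in val)
        errs.append(wrong / len(val))
    return sum(errs) / len(errs)

def select_model(X, y, candidates, k=5):
    """Pick the hyperparameter value with the lowest cross-validated error."""
    return min(candidates, key=lambda t: cv_error(X, y, t, k))

# Toy data: class False clustered near 0.2, class True near 0.8.
X = [0.1, 0.15, 0.2, 0.25, 0.3, 0.7, 0.75, 0.8, 0.85, 0.9]
y = [False] * 5 + [True] * 5
best = select_model(X, y, candidates=[0.1, 0.3, 0.5, 0.7, 0.9])
```

The paper's point is that in the small-sample regime this validation estimate becomes unreliable (each fold is tiny), which is exactly where MLT-based bounds can become competitive.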

Cited by 94 publications (50 citation statements)
References 22 publications
“…AHP is used to determine the weights of the criteria and subcriteria, as it assists in the computation process and in identifying the main criteria [15]. SVM is used for classification because it copes well with the curse of dimensionality, i.e. with problems that have few data samples, as in this case [16]. TOPSIS is used to produce recommendations, as it is the simplest method for maximizing the distance from the negative ideal and minimizing the distance to the positive ideal [17].…”
Section: AHP-SVM-TOPSIS
confidence: 99%
“…A third approach is based on the ideas of Martein and Schaible (1987) and Anguita et al. (2010). It exploits, in turn, conventional Linear (LP) and Quadratic Programming (QP) optimization algorithms, as shown in Algorithm 4.…”
Section: Training a Classifier with I-SVM
confidence: 99%
“…Algorithm 4: algorithm for solving the I-SVM Problem (73), based on the results of Martein and Schaible (1987) and Anguita et al. (2010). We iteratively proceed by solving the dual of Problem (98) and updating the value of γ0 until the termination condition is met:…”
Section: Training a Classifier with I-SVM
confidence: 99%
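The scheme quoted above is an instance of a generic fixed-point loop: solve an inner (dual) subproblem, update a scalar parameter (γ0 in the quote), and stop when successive values agree to tolerance. A hypothetical sketch of that control flow follows; the inner solver, update rule, and names are stand-ins, not the actual I-SVM formulation.

```python
# Generic iterate-until-convergence loop: solve an inner subproblem,
# update a scalar parameter, stop when the parameter stabilizes.

def iterate_to_fixed_point(solve_dual, update, gamma0, tol=1e-8, max_iter=100):
    """Return (converged parameter, last inner solution)."""
    gamma = gamma0
    sol = None
    for _ in range(max_iter):
        sol = solve_dual(gamma)          # inner (dual) subproblem
        new_gamma = update(sol, gamma)   # parameter update step
        if abs(new_gamma - gamma) < tol: # termination condition
            return new_gamma, sol
        gamma = new_gamma
    return gamma, sol

# Toy stand-in: the update is a Newton step whose fixed point is sqrt(2),
# so the loop converges to ~1.41421356.
gamma, _ = iterate_to_fixed_point(
    solve_dual=lambda g: g,                     # trivial "dual" solution
    update=lambda sol, g: (g + 2.0 / g) / 2.0,  # Newton step for sqrt(2)
    gamma0=1.0,
)
```

The real algorithm replaces the trivial inner call with an LP/QP solve, but the surrounding loop structure is the same.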
“…Data Analysis is improving our ability to understand complex phenomena much more rapidly than a priori physical models have done in the past (Anguita et al., 2010; Peng et al., 2010). Real-world systems are usually very complex and influenced by many exogenous factors, which make them very challenging to model relying solely on a priori knowledge of the problem (Witten et al., 2016).…”
Section: Introduction
confidence: 99%