Advances in Large-Margin Classifiers 2000
DOI: 10.7551/mitpress/1113.003.0008
Probabilities for SV Machines

Cited by 750 publications (85 citation statements)
References: 0 publications
“…We first selected five baseline machine learning methods to predict microbial population adaptation, including RF with the squared error criterion, extreme gradient boosting (XGBoost) with the squared error loss function, gradient boosting regression tree (GBRT) with the squared error loss function, support vector machine (SVM) with a radial basis function kernel and multilayer perceptron (MLP). The predictive performances were compared using a tenfold cross-validation test (Chen & Guestrin, 2016; Friedman, 2002; He et al., 2015; Platt, 2000). For these classifiers, we utilized the grid search method to select the optimal parameters.…”
Section: Ensemble Machine Learning Model Development (mentioning)
Confidence: 99%
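The grid-search-plus-tenfold-cross-validation workflow quoted above can be illustrated with a short scikit-learn sketch. The synthetic data, parameter ranges, and accuracy scoring below are illustrative assumptions rather than the cited study's actual settings; the `probability=True` option of `SVC` applies the Platt-style calibration that the cited paper introduces.

```python
# Minimal sketch (not the cited study's code): grid search over RBF-SVM
# hyperparameters, scored by tenfold cross-validation.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in for the microbial adaptation features and labels.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# RBF-kernel SVM; probability=True enables Platt-style probability calibration.
svm = SVC(kernel="rbf", probability=True, random_state=0)

# Hypothetical parameter grid; the study's actual search space is not given.
param_grid = {"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.1, 1]}

# Tenfold cross-validation, as in the cited comparison.
search = GridSearchCV(svm, param_grid, cv=10, scoring="accuracy")
search.fit(X, y)
print(search.best_params_, search.best_score_)
```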
“…For predictions based on SVM models, instead of using a boolean output value indicating 1D or non-1D material based on the location of a feature vector relative to the separating hyperplane, we generate a continuous classification between 0 and 1 using Platt scaling [27]. For random forest models, a probability can likewise be determined by the percentage of individual decision trees (100 used) where the individual tree predicts a composition to be 1D. The relevant threshold of the probability prediction can be chosen not only based on the precision-recall curve for the test set but also on the number of predicted positive 1D materials when the model is used for screening.…”
Section: The Journal of Physical Chemistry (mentioning)
Confidence: 99%
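The two probability estimates described in this excerpt can be sketched as follows; the synthetic data, the linear-kernel SVM, and the 5-fold calibration split are assumptions for illustration. Platt scaling is obtained here through scikit-learn's sigmoid calibration of the SVM decision values, and the random-forest probability is the fraction of the 100 trees voting for the positive (1D) class.

```python
# Hedged sketch of the two probability estimates described above.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.calibration import CalibratedClassifierCV
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Platt scaling: fit a sigmoid on the SVM's decision-function outputs.
svm_platt = CalibratedClassifierCV(LinearSVC(C=1.0, max_iter=10000),
                                   method="sigmoid", cv=5)
svm_platt.fit(X_train, y_train)
p_svm = svm_platt.predict_proba(X_test)[:, 1]   # continuous score in [0, 1]

# Random forest: probability = fraction of the 100 trees predicting class 1.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)
p_rf = rf.predict_proba(X_test)[:, 1]

# A decision threshold can then be chosen from the precision-recall trade-off.
print(p_svm[:5], p_rf[:5])
```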
“…{g} vs {m, s}, {m} vs {s, g}, {s} vs {m, g}) and three binary SVM classifiers for the vowels ({a} vs {i, u}, {i} vs {a, u}, {u} vs {a, i}), aggregating the results [39,40]. The unnormalised scores that this produces can be mapped to interpretable posterior probabilities for consonants and for vowels [41,42]. The input features to the SVMs were the same as for the DNNs (i.e.…”
Section: Support Vector Machines (mentioning)
Confidence: 99%
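A possible realization of the one-vs-rest arrangement quoted above, assuming synthetic data in place of the vowel features: each binary SVM's unnormalised scores are mapped to posterior probabilities with a sigmoid (Platt) fit, then normalised across the three classifiers. Class counts and model settings are illustrative, not those of the cited work.

```python
# Hedged sketch: three one-vs-rest binary SVMs with Platt-calibrated posteriors.
from sklearn.datasets import make_classification
from sklearn.multiclass import OneVsRestClassifier
from sklearn.calibration import CalibratedClassifierCV
from sklearn.svm import SVC

# Synthetic three-class problem standing in for {a}, {i}, {u}.
X, y = make_classification(n_samples=600, n_features=30, n_informative=10,
                           n_classes=3, random_state=0)

# Each binary SVM ({a} vs {i, u}, etc.) is calibrated with a sigmoid (Platt) fit.
base = CalibratedClassifierCV(SVC(kernel="rbf"), method="sigmoid", cv=5)
ovr = OneVsRestClassifier(base).fit(X, y)

# Per-class posterior probabilities, normalised across the three classifiers.
posteriors = ovr.predict_proba(X[:5])
print(posteriors.round(3))
```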