2008
DOI: 10.1093/biomet/asm077

Probability estimation for large-margin classifiers

Abstract: Large-margin classifiers have proven effective in delivering high predictive accuracy, particularly those that focus on the decision boundary and bypass estimating the class probability given the input. As a result, these classifiers may not directly yield an estimated class probability, which is of interest in itself. To overcome this difficulty, this article proposes a novel method to estimate the class probability through sequential classifications, by utilising…

Cited by 67 publications (121 citation statements)
References 26 publications
Citing publications: 2010–2024
“…There is a direct analogy between hard- and soft-margin classifiers: hard-margin classifiers estimate the decision boundary directly, whereas soft-margin classifiers back out the decision boundary from conditional class probabilities. When the class probabilities are complex, hard-margin classifiers may deliver improved performance (Wang et al., 2008); likewise, when the conditional expectations are complex, directly targeting the decision rule is likely to perform better. Since the proposed methods do not model the relationship between outcomes and dynamic treatment regimes (DTRs), they may be more robust to model misspecification than statistical modelling alternatives such as Q-learning (Zhang et al., 2012a,b).…”
Section: Discussion (mentioning)
confidence: 99%
“…Thus motivated, Lin et al. (2004) proposed the weighted SVM (WSVM), which assigns different weights to observations from different classes during training, and established its Fisher consistency. Building on the WSVM's Fisher consistency, Wang et al. (2008) proposed a probability estimation scheme that estimates, for each new observation, the conditional probability of belonging to each class. Inspired by this WSVM-based probability estimation scheme as a way to address the aforementioned homogeneity issue in partitioning a binary response, Shin et al. (2014) proposed a probability-enhanced dimension reduction method for multivariate data.…”
Section: Support Vector Machine (mentioning)
confidence: 99%
“…By using different weights for observations in different classes, Wang, Shen, and Liu (2008) proposed to solve the weighted SVM
$$\min_{f}\ \frac{1}{n}\sum_{i=1}^{n}\bigl\{(1-\pi)\,I(y_i=1)+\pi\,I(y_i=-1)\bigr\}\,\bigl(1-y_i f(x_i)\bigr)_{+}+\lambda J(f),\qquad (6.2)$$
where $0<\pi<1$ and $f(\cdot)$ can be either linear or nonlinear with an appropriately chosen penalty $J(f)$. For the weighted SVM (6.2), its classification boundary is shown to consistently estimate the boundary $\{x : p(x)=\pi\}$.…”
Section: Probability Estimation for Binary Hard Classifiers (mentioning)
confidence: 99%
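The weighted SVM in (6.2) can be emulated with off-the-shelf tools. Below is a minimal sketch using scikit-learn's SVC with per-class weights as a stand-in for (6.2); the toy data, the linear kernel, and the value of pi are illustrative assumptions, not part of the cited work.

```python
# Minimal sketch of a weighted SVM in the spirit of (6.2), assuming
# scikit-learn. Losses on class +1 are weighted by (1 - pi) and losses on
# class -1 by pi, so the fitted boundary targets {x : p(x) = pi}.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                                  # toy covariates
y = np.where(X[:, 0] + 0.5 * rng.normal(size=200) > 0, 1, -1)  # toy labels

pi = 0.7  # probability level, 0 < pi < 1
clf = SVC(kernel="linear", C=1.0, class_weight={1: 1 - pi, -1: pi})
clf.fit(X, y)

# sign(f(x)) > 0 estimates the event {p(x) > pi}
print(clf.decision_function(X[:5]))
```

Setting pi = 1/2 recovers the usual unweighted SVM, whose boundary targets {x : p(x) = 1/2}.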
“…But the information is limited in the sense that it only reveals whether the conditional probability is larger than π. Wang, Shen, and Liu (2008) proposed solving weighted SVMs for a range of weights π ∈ (0, 1). An interval estimate of the conditional class probability is then obtained.…”
Section: Probability Estimation for Binary Hard Classifiers (mentioning)
confidence: 99%
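A hedged sketch of this interval scheme, again assuming scikit-learn: weighted SVMs are trained over a grid of π values, and p(x) is bracketed between the largest π at which x is still classified +1 and the smallest π at which it is classified -1. The grid, the helper name, and the linear kernel are illustrative choices, not the authors' exact algorithm.

```python
# Sketch of interval probability estimation via sequential weighted SVMs,
# in the spirit of Wang, Shen, and Liu (2008); details are illustrative.
import numpy as np
from sklearn.svm import SVC

def probability_interval(X, y, x_new, grid=np.linspace(0.05, 0.95, 19)):
    """Bracket p(x_new) = P(Y = +1 | x_new) between two grid values."""
    lower, upper = 0.0, 1.0
    for pi in grid:
        clf = SVC(kernel="linear", C=1.0, class_weight={1: 1 - pi, -1: pi})
        clf.fit(X, y)
        if clf.decision_function(x_new.reshape(1, -1))[0] > 0:
            lower = max(lower, pi)  # labelled +1 at level pi: evidence p(x) > pi
        else:
            upper = min(upper, pi)  # labelled -1 at level pi: evidence p(x) <= pi
    return lower, upper
```

A finer grid narrows the interval at the cost of training more SVMs; the interval midpoint can serve as a point estimate of the class probability.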