Proceedings of the 26th Annual International Conference on Machine Learning 2009
DOI: 10.1145/1553374.1553419
PAC-Bayesian learning of linear classifiers

Abstract: We present a general PAC-Bayes theorem from which all known PAC-Bayes risk bounds are obtained as particular cases. We also propose different learning algorithms for finding linear classifiers that minimize these bounds. These learning algorithms are generally competitive with both AdaBoost and the SVM.
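For orientation, a minimal sketch of the kind of bound the abstract refers to, assuming the standard Seeger/Langford form (a special case of the paper's general theorem; the notation for the empirical and true Gibbs risks, prior P, and posterior Q is assumed here, not taken from the abstract):

```latex
% With probability at least 1 - \delta over an i.i.d. m-sample S,
% simultaneously for all posteriors Q over the classifier space:
\mathrm{kl}\!\left(R_S(G_Q)\,\middle\|\,R(G_Q)\right)
  \;\le\; \frac{\mathrm{KL}(Q\|P) + \ln\frac{2\sqrt{m}}{\delta}}{m},
\qquad
\mathrm{kl}(q\|p) = q\ln\frac{q}{p} + (1-q)\ln\frac{1-q}{1-p}.
```

Here R_S(G_Q) is the empirical risk of the Gibbs classifier drawn from Q, R(G_Q) its true risk, and KL(Q‖P) the divergence from the prior; the learning algorithms mentioned in the abstract minimize the right-hand side's implied risk bound over the parameters of Q.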

Cited by 132 publications (207 citation statements)
References 6 publications
“…Many other ways to choose h_t could be studied, namely a combination of a subset of weak classifiers, or the choice between the weak classifier of the major view and a combination of the classifiers on minor views, etc. Hence, many alternate selections deserve to be studied, both theoretically (for example, in the PAC-Bayes framework [23]) and empirically.…”
Section: Discussion and Improvements
confidence: 99%
“…PAC-Bayes (McAllester 1999; Seeger 2002; McAllester 2003; Langford 2006; Lacasse et al. 2006; Germain et al. 2009; Seldin et al. 2012; Tolstikhin and Seldin 2013) is a theory for bounding the generalization error of classifiers. A variety of PAC-Bayes generalization bounds (McAllester 1999; Seeger 2002; McAllester 2003; Langford 2006; Lacasse et al. 2006; Germain et al. 2009; Seldin et al. 2012; Tolstikhin and Seldin 2013) have been proposed for different classifiers such as deterministic classifiers, Gibbs classifiers (McAllester 1999), linear classifiers or nonlinear classifiers (e.g.…”
Section: PAC-Bayes Generalization Bounds
confidence: 99%
“…PAC-Bayes theory (McAllester 1999; Seeger 2002; McAllester 2003; Langford 2006; Lacasse et al. 2006; Germain et al. 2009; Seldin et al. 2012; Tolstikhin and Seldin 2013) potentially can provide a framework to learn feature mappings and classifiers jointly, allowing the fine tuning of feature mapping. PAC-Bayes is a theory proposed to bound the generalization error of classifiers, where classifiers are learned by minimizing the generalization bound with respect to the parameters of the classifiers over the training set.…”
Section: Introduction
confidence: 99%
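The idea quoted above, learning a classifier by minimizing a PAC-Bayes-style bound over its parameters, can be sketched for a linear classifier with an isotropic Gaussian posterior N(w, I) and prior N(0, I). Under that standard choice the Gibbs risk on an example (x, y) is the probit loss Φ(-y·w·x/‖x‖) and KL(Q‖P) = ‖w‖²/2. This is a simplified linearized surrogate for a real bound minimization, not the paper's exact algorithm; the trade-off constant C, the learning rate, and the toy data are illustrative assumptions.

```python
import math

def phi(z):        # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi_pdf(z):    # standard normal density
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def objective(w, data, C):
    # Empirical Gibbs risk of the N(w, I) posterior (probit loss)
    # plus a KL(N(w,I) || N(0,I)) = ||w||^2 / 2 penalty, weighted by C/m.
    m = len(data)
    gibbs = sum(phi(-y * dot(w, x) / norm(x)) for x, y in data) / m
    kl = 0.5 * sum(wi * wi for wi in w)
    return gibbs + C * kl / m

def grad(w, data, C):
    # Gradient of the surrogate objective with respect to w.
    m = len(data)
    g = [C * wi / m for wi in w]
    for x, y in data:
        nx = norm(x)
        margin = y * dot(w, x) / nx
        coef = -phi_pdf(-margin) * y / (nx * m)
        for i in range(len(w)):
            g[i] += coef * x[i]
    return g

# Toy linearly separable data: label follows the sign of the first coordinate.
data = [([1.0, 0.3], 1), ([0.8, -0.2], 1),
        ([-1.0, 0.1], -1), ([-0.7, -0.4], -1)]
w, C = [0.0, 0.0], 1.0
before = objective(w, data, C)   # 0.5 at w = 0 (chance-level Gibbs risk, zero KL)
for _ in range(200):
    g = grad(w, data, C)
    w = [wi - 0.5 * gi for wi, gi in zip(w, g)]
after = objective(w, data, C)    # strictly smaller after descent
```

Gradient descent drives w toward the separating direction while the KL term keeps ‖w‖ from growing without limit, which mirrors the bound's trade-off between empirical fit and divergence from the prior.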
“…Germain et al. [7] recently show a simplified PAC-Bayes generalization proof technique for linear classifiers in a more general setting.…”
Section: Generalization Error
confidence: 99%