2005
DOI: 10.1007/11564096_12

Robust Bayesian Linear Classifier Ensembles

Abstract: Ensemble classifiers combine the classification results of several classifiers. Simple ensemble methods such as uniform averaging over a set of models usually provide an improvement over selecting the single best model. Probabilistic classifiers usually restrict the set of possible models that can be learnt in order to lower computational complexity costs. In these restricted spaces, where incorrect modelling assumptions may be made, uniform averaging sometimes performs even better than Bayesian…

Cited by 39 publications (51 citation statements). References 19 publications.
“…It has also been shown that a SPODE ensemble can further improve upon the classification accuracy of a single SPODE by decreasing the classification variance [13,18]. The first approach to ensembling SPODEs was AODE [13] which used equal weight combination of all SPODEs whose parent occurred with a user-specified minimum frequency in the training data.…”
Section: SPODE and SPODE Ensemble
confidence: 99%
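The equal-weight combination described above has a compact form: each SPODE with superparent x_p estimates P(c, x) as P(c, x_p) times the product of P(x_i | c, x_p) over the remaining attributes, and AODE averages these estimates over the admissible superparents. Below is a minimal Python sketch of this scheme, assuming discrete attribute values and plain count-based estimates with no smoothing; the class name and layout are illustrative, not the reference implementation of [13].

from collections import Counter

class AODESketch:
    """Equal-weight average of one SPODE per superparent attribute value."""

    def fit(self, X, y):
        self.n = len(X)
        self.classes = sorted(set(y))
        self.parent_freq = Counter()  # n(x_p = v)
        self.joint = Counter()        # n(c, x_p = v)
        self.child = Counter()        # n(x_i = u, c, x_p = v)
        for row, c in zip(X, y):
            for p, v in enumerate(row):
                self.parent_freq[(p, v)] += 1
                self.joint[(c, p, v)] += 1
                for i, u in enumerate(row):
                    self.child[(c, p, v, i, u)] += 1
        return self

    def scores(self, row, min_count=1):
        """Per-class score: sum over admissible SPODEs of
        P(c, x_p) * prod_{i != p} P(x_i | c, x_p).
        The uniform 1/k weight is constant across classes and omitted."""
        out = {}
        for c in self.classes:
            total = 0.0
            for p, v in enumerate(row):
                if self.parent_freq[(p, v)] < min_count:
                    continue  # AODE's minimum-frequency gate on the superparent
                denom = self.joint[(c, p, v)]
                if denom == 0:
                    continue  # class c never observed with this parent value
                s = denom / self.n  # estimate of P(c, x_p = v)
                for i, u in enumerate(row):
                    if i != p:
                        s *= self.child[(c, p, v, i, u)] / denom  # P(x_i | c, x_p)
                total += s
            out[c] = total
        return out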
“…The first approach to ensembling SPODEs was AODE [13] which used equal weight combination of all SPODEs whose parent occurred with a user-specified minimum frequency in the training data. Subsequent research suggested that frequency is not a useful model selection criterion and that appropriate weighting can substantially improve upon equal weighting, proposing weighting schemes such as MAPLMG [18]. On the other hand, it has also been shown that model selection can be very effective when ensembling SPODEs [22].…”
Section: SPODE and SPODE Ensemble
confidence: 99%
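A weighting scheme such as MAPLMG replaces AODE's uniform average with a per-SPODE weight. The sketch below shows only the combination step, under the assumption that a weight vector has already been obtained somehow; how MAPLMG [18] actually learns its weights is not reproduced here.

def weighted_spode_score(spode_scores, weights):
    """Combine per-SPODE class scores with nonnegative weights.

    spode_scores: list of {class: score} dicts, one per SPODE.
    weights: one weight per SPODE (assumed given; not learned here).
    """
    classes = spode_scores[0].keys()
    return {c: sum(w * s[c] for w, s in zip(weights, spode_scores))
            for c in classes}

# Equal weighting (AODE) is the special case of a uniform weight vector:
# weighted_spode_score(scores, [1 / len(scores)] * len(scores))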
“…, x_n the original AODE excludes ODEs with parent x_i where the frequency of the value x_i is lower than the limit m=30, a widely used minimum on sample size for statistical inference purposes. However, subsequent research [14] shows that this constraint actually increases error and hence the current research uses m=1.…”
Section: Averaged One-Dependence Estimators (AODE)
confidence: 99%
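A toy illustration of that minimum-frequency limit, under assumed data: with m=30 a rare superparent value is excluded from the ensemble, while m=1 (the setting the quote says later research [14] recommends) admits every observed value.

from collections import Counter

def admissible_parents(column_values, m):
    """Return the attribute values frequent enough to serve as superparents."""
    freq = Counter(column_values)
    return {v for v, n in freq.items() if n >= m}

col = ["a"] * 40 + ["b"] * 5           # value "b" occurs only 5 times
print(admissible_parents(col, m=30))   # {'a'}       -- "b" excluded
print(admissible_parents(col, m=1))    # {'a', 'b'}  -- all observed values used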
“…Semi-naive Bayesian techniques further improve naive Bayes' accuracy by relaxing its assumption that the attributes are conditionally independent [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]. One approach to weakening this assumption is to use an x-dependence classifier [7], in which each attribute depends upon the class and at most x other attributes.…”
Section: Introduction
confidence: 99%
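The x-dependence factorisation the quote refers to can be written down directly: each attribute is conditioned on the class plus at most x parent attributes, with naive Bayes as the x=0 case and a SPODE as the x=1 case with one shared superparent. A minimal sketch follows, assuming the conditional probability tables are given as inputs; all names here are hypothetical.

def joint_prob(row, c, p_class, p_cond, parents):
    """P(c, x) = P(c) * prod_i P(x_i | c, parents_i(x)).

    parents: dict mapping attribute index -> tuple of parent indices
    (each of length at most x); p_cond is an assumed lookup table keyed
    by (attribute, value, class, parent_values).
    """
    prob = p_class[c]
    for i, v in enumerate(row):
        parent_vals = tuple(row[j] for j in parents[i])
        prob *= p_cond[(i, v, c, parent_vals)]
    return prob

# Tiny demo (x = 0, i.e. naive Bayes) with two binary attributes:
p_class = {'+': 0.5, '-': 0.5}
parents = {0: (), 1: ()}
p_cond = {(0, 1, '+', ()): 0.9, (1, 1, '+', ()): 0.8,
          (0, 1, '-', ()): 0.2, (1, 1, '-', ()): 0.3}
print(joint_prob((1, 1), '+', p_class, p_cond, parents))  # 0.5 * 0.9 * 0.8 = 0.36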