2011
DOI: 10.1007/s10994-011-5263-6
Learning by extrapolation from marginal to full-multivariate probability distributions: decreasingly naive Bayesian classification

Abstract: Averaged n-Dependence Estimators (AnDE) is an approach to probabilistic classification learning that learns by extrapolation from marginal to full-multivariate probability distributions. It utilizes a single parameter that transforms the approach between a low-variance high-bias learner (Naive Bayes) and a high-variance low-bias learner with Bayes optimal asymptotic error. It extends the underlying strategy of Averaged One-Dependence Estimators (AODE), which relaxes the Naive Bayes independence assumption while…
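The abstract's single-parameter trade-off can be illustrated with a small sketch for discrete attributes (a hypothetical re-implementation based on the description above, not the authors' code): the parameter n selects the size of the "super-parent" attribute sets whose sub-models are averaged, so n = 0 recovers Naive Bayes and n = 1 recovers AODE.

```python
from collections import defaultdict
from itertools import combinations
from math import prod

class AnDE:
    """Sketch of Averaged n-Dependence Estimators for discrete data."""

    def __init__(self, n=1, alpha=1.0):
        self.n = n          # number of super-parent attributes per sub-model
        self.alpha = alpha  # Laplace smoothing constant

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.d = len(X[0])
        # every size-n attribute subset acts as a super-parent set
        self.parent_sets = list(combinations(range(self.d), self.n))
        self.joint = defaultdict(float)   # counts of (class, parent values)
        self.cond = defaultdict(lambda: defaultdict(float))  # child counts
        self.values = [set() for _ in range(self.d)]
        self.N = len(y)
        for xs, c in zip(X, y):
            for j, v in enumerate(xs):
                self.values[j].add(v)
            for S in self.parent_sets:
                key = (c, S, tuple(xs[i] for i in S))
                self.joint[key] += 1
                for j in range(self.d):
                    if j not in S:
                        self.cond[key][(j, xs[j])] += 1
        return self

    def predict(self, xs):
        best, best_score = None, float("-inf")
        for c in self.classes:
            score = 0.0
            for S in self.parent_sets:  # average the n-dependence sub-models
                key = (c, S, tuple(xs[i] for i in S))
                n_key = self.joint.get(key, 0.0)
                denom = self.N + self.alpha * len(self.classes) * prod(
                    len(self.values[i]) for i in S)
                p = (n_key + self.alpha) / denom  # estimate of P(y, x_S)
                for j in range(self.d):
                    if j in S:
                        continue
                    # estimate of P(x_j | y, x_S) with Laplace smoothing
                    p *= (self.cond[key].get((j, xs[j]), 0.0) + self.alpha) / (
                        n_key + self.alpha * len(self.values[j]))
                score += p
            if score > best_score:
                best, best_score = c, score
        return best
```

With n = 1 this averages one sub-model per attribute, as in AODE; larger n lowers bias at the cost of variance and of probability tables that grow exponentially in n.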

Cited by 77 publications (68 citation statements)
References 38 publications
“…This hypothesis was tested in the context of Bayesian Network classifiers in Webb et al. (2005, 2011), where the results corroborated the hypothesis. However, we are not aware of any past work that investigates this hypothesis in the context of higher-order Logistic Regression.…”
Section: Introduction
confidence: 71%
“…It has been shown that Bayesian Network Classifiers (BNCs) that explicitly represent higher-order interactions tend to have lower bias than those that do not (Martinez et al. 2016; Webb et al. 2011). This is because BNCs that can represent higher-order interactions can exactly represent any of a superset of the distributions that can be represented by BNCs that are restricted to lower-order interactions.…”
Section: Introduction
confidence: 99%
“…Still, we plan to extend the experimental part to a test bed of high-dimensional datasets in order to corroborate these conclusions. Moreover, we believe that the positive results observed in AODE are good motivation to expect that the beneficial properties of NDD will be strengthened when applied to Averaged n-Dependence Estimators (AnDE) [19] for values of n greater than or equal to 2 (since when n = 1 it is equivalent to AODE).…”
Section: Discussion
confidence: 99%
“…Unfortunately, there is no a priori means to preselect a value of k that achieves the lowest error for a given training set, as this depends on a complex interplay between the quantity of data and the complexity and strength of the interactions between the attributes, as proved by Martinez et al. [8]. From the discussion above, we can see that, for each KDF i , the space complexity of the probability table increases exponentially as k increases; to achieve a trade-off between classification performance and efficiency, we restrict the structure complexity to two-dependence, which is also adopted by Webb et al. [26].…”
Section: For Each Sequence
confidence: 99%