2018
DOI: 10.1016/j.ecolmodel.2017.12.015

Discretizing environmental data for learning Bayesian-network classifiers

Cited by 39 publications (15 citation statements)
References 42 publications

“…State variables in natural systems are most often continuous, and discretization of continuous variables may result in information loss (Uusitalo, 2007). The method employed for discretization may impact the predictive quality of the model (Ropero, Renooij, & van der Gaag, 2018), and careful consideration must be given to when and how to discretize. For data‐driven BNs without a priori distributions, a large number of intervals for a variable can result in zero frequencies and a failure to be adequately representative of the natural system (Uusitalo, 2007).…”
Section: Challenges In Bayesian Network Modeling Of South African Wat...mentioning
confidence: 99%
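To make the zero-frequency concern in the excerpt above concrete, here is a minimal sketch: a skewed continuous variable cut into many equal-width intervals leaves some intervals with no observations. The data, bin count, and pandas-based workflow are illustrative assumptions, not taken from any of the cited papers.

    import numpy as np
    import pandas as pd

    # Synthetic, right-skewed "environmental" variable (illustrative only).
    rng = np.random.default_rng(42)
    values = pd.Series(rng.lognormal(mean=0.0, sigma=1.0, size=200))

    # Discretize into 20 equal-width intervals and count observations per interval.
    counts = pd.cut(values, bins=20).value_counts().sort_index()
    print(counts)  # with skewed data, several of the 20 bins typically end up empty
    print((counts == 0).sum(), "empty intervals out of 20")

A Bayesian network learned from such a discretization would have states that are never observed, which is the zero-frequency problem the excerpt describes.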
“…The naive Bayesian classifier is a classifier based on Bayes’ theorem. 34,35 For a sample {x₁, x₂, …, xₙ}, the probability of this sample belonging to label c can be expressed by Bayes’ theorem as…”
Section: The Tan Classifier and Its Adaboost Algorithmmentioning
confidence: 99%
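The formula itself is elided in the excerpt; for reference, the standard form of Bayes’ theorem for classification, together with the naive conditional-independence assumption the excerpt’s classifier relies on, reads as follows (generic notation, not copied from the citing paper):

    % Posterior probability of label c for a sample (x_1, ..., x_n)
    P(c \mid x_1, \ldots, x_n) = \frac{P(c)\, P(x_1, \ldots, x_n \mid c)}{P(x_1, \ldots, x_n)}
    % Naive Bayes: attributes assumed conditionally independent given the label c
    P(c \mid x_1, \ldots, x_n) \propto P(c) \prod_{i=1}^{n} P(x_i \mid c)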
“…The variables are discretized into three levels using the equal frequency method (see, e.g., Ropero et al., 2018). A BN over these variables together with the binary variable Muscle Loss (ML) is learnt using a hill-climbing algorithm and reported in Figure 6 (left).…”
Section: Cancer-associated Muscle Wastingmentioning
confidence: 99%
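A minimal sketch of the equal-frequency (quantile) discretization mentioned in this excerpt, assuming a pandas workflow; the variable, sample size, and three-level labelling are illustrative assumptions and do not come from either paper.

    import numpy as np
    import pandas as pd

    # Hypothetical continuous variable (illustrative data only).
    rng = np.random.default_rng(0)
    values = pd.Series(rng.lognormal(mean=0.0, sigma=1.0, size=500), name="variable")

    # Equal-frequency discretization into three levels: cut points are placed at the
    # data quantiles, so each interval holds roughly the same number of observations.
    levels, edges = pd.qcut(values, q=3, labels=["low", "medium", "high"], retbins=True)

    print(levels.value_counts())  # roughly 167 observations per level
    print(edges)                  # the interval boundaries derived from the quantiles

Because each interval holds roughly the same number of observations, equal-frequency binning avoids the empty states that equal-width binning can produce on skewed data, which is one reason it is a common default choice when discretizing variables for Bayesian-network learning.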