2019
DOI: 10.3390/e21040394
On the Use of Entropy to Improve Model Selection Criteria

Abstract: The most widely used forms of model selection criteria, the Bayesian Information Criterion (BIC) and the Akaike Information Criterion (AIC), are expressed in terms of synthetic indicators of the residual distribution: the variance and the mean-squared error of the residuals respectively. In many applications in science, the noise affecting the data can be expected to have a Gaussian distribution. Therefore, at the same level of variance and mean-squared error, models, whose residuals are more uniformly distrib…
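The classic criteria described in the abstract can be illustrated with a minimal sketch, assuming a least-squares fit with Gaussian noise; function and variable names here are my own, not from the paper:

```python
import numpy as np

def aic_bic(residuals, k):
    """Classic AIC and BIC computed from the residuals of a fit,
    under the usual Gaussian-noise assumption.

    residuals : array of (data - model) values
    k         : number of fitted parameters
    """
    n = len(residuals)
    mse = np.mean(residuals ** 2)  # mean-squared error of the residuals
    # For Gaussian noise, the log-likelihood reduces (up to constants)
    # to n * ln(MSE); AIC and BIC differ only in the complexity penalty.
    aic = n * np.log(mse) + 2 * k
    bic = n * np.log(mse) + k * np.log(n)
    return aic, bic
```

In use, candidate models are fitted to the same data and the one with the lower AIC (or BIC) is preferred; BIC penalizes extra parameters more heavily once n exceeds about 8 points.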

Cited by 26 publications (23 citation statements)
References 19 publications
“…Equations (7) and (8) also suggest a consideration about the asymptotic stability of AIC_GF and BIC_GF (where GF means Goodness of Fit): if the model were perfect, the Z score would tend to zero as the number of points increases, and AIC_GF and BIC_GF would converge to AIC_H and BIC_H. The asymptotic convergence of the criteria using the entropy of the residuals has already been demonstrated in [9], proving the asymptotic convergence of AIC_GF and BIC_GF as well. The AIC_GF and BIC_GF of Equations (7) and (8) were, therefore, the new versions of the model selection criteria that were tested and whose performance is described in the rest of the paper.…”
Section: Model Selection and Goodness of Fit Tests
confidence: 59%
“…For Gaussian and flat-distributed noise, all versions of both the AIC and the BIC (traditional formulation, Shannon entropy, and goodness of fit) returned essentially equal values in all cases. As shown in [9], for these cases the upgraded versions (AIC_H, BIC_H and AIC_GF, BIC_GF) improve convergence to the right model only in limited and very difficult cases. For the other two types of noise, the situation was dramatically different.…”
Section: Results for Exponential Functions, Polynomials, and Power Laws
confidence: 94%
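The entropy-based variants (AIC_H, BIC_H) quoted above are defined in the cited paper [9]; the sketch below is an assumption on my part, not the paper's exact formula: it replaces the ln(MSE) likelihood term of the classic criterion with the Shannon entropy of the binned residuals, so that residual distributions spread more uniformly across the bins score worse than concentrated ones. Function names and the bin count are illustrative:

```python
import numpy as np

def shannon_entropy(residuals, bins=30):
    """Shannon entropy (in nats) of the histogram of the residuals.
    Residuals spread evenly over the bins give high entropy; residuals
    concentrated around zero give low entropy."""
    counts, _ = np.histogram(residuals, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def aic_entropy(residuals, k, bins=30):
    """Illustrative entropy-flavoured AIC: the n * ln(MSE) term of the
    classic criterion is replaced by n times the residual entropy.
    This is a sketch of the idea, not the exact AIC_H of the paper."""
    n = len(residuals)
    return n * shannon_entropy(residuals, bins) + 2 * k
```

The entropy of a 30-bin histogram is bounded above by ln(30), attained when the residuals fill all bins uniformly, which is why uniform-looking residuals are penalized relative to peaked ones at equal variance.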