2017
DOI: 10.3390/e19100520

Entropy Ensemble Filter: A Modified Bootstrap Aggregating (Bagging) Procedure to Improve Efficiency in Ensemble Model Simulation

Abstract: Over the past two decades, the Bootstrap AGGregatING (bagging) method has been widely used for improving simulation. The computational cost of this method scales with the size of the ensemble, but excessively reducing the ensemble size comes at the cost of reduced predictive performance. The novel procedure proposed in this study is the Entropy Ensemble Filter (EEF), which uses the most informative training data sets in the ensemble rather than all ensemble members created by the bagging method. The results of…
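The abstract describes EEF as a selection step on top of bagging: score each bootstrap resample by an entropy measure and train only on the most informative ones. Below is a minimal, hypothetical sketch of that idea; the histogram-based entropy estimator, the keep fraction, and all function names are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the EEF selection idea: generate bootstrap
# (bagging) resamples, score each by Shannon entropy, and keep only the
# most informative fraction for ensemble model training.
import numpy as np

def shannon_entropy(sample, bins=10):
    """Histogram estimate of the Shannon entropy (in nats) of a 1-D sample."""
    counts, _ = np.histogram(sample, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def entropy_ensemble_filter(data, n_resamples=100, keep_fraction=0.3, rng=None):
    """Return the bootstrap resamples with the highest entropy scores."""
    rng = np.random.default_rng(rng)
    resamples = [rng.choice(data, size=len(data), replace=True)
                 for _ in range(n_resamples)]
    scores = np.array([shannon_entropy(r) for r in resamples])
    n_keep = max(1, int(keep_fraction * n_resamples))
    top = np.argsort(scores)[::-1][:n_keep]  # most informative first
    return [resamples[i] for i in top]

# Usage: filter 100 bagging resamples of a synthetic series down to 30.
data = np.random.default_rng(0).normal(size=500)
selected = entropy_ensemble_filter(data, n_resamples=100, keep_fraction=0.3, rng=1)
print(len(selected), "resamples retained for ensemble training")
```

The point of the sketch is only the filtering step: the computational saving comes from training the ensemble on the retained subset instead of all resamples.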


Cited by 13 publications (15 citation statements)
References 23 publications
“…The principle of maximum entropy states that the probability function correctly describing a dataset is the one with the largest entropy $S$. The entropy (12) refers to a particular scattering at some fixed $s$. For each $s_i$ one can construct a dataset by taking the pair $[0, G_{\mathrm{inel}}(s_i, b(s_i))]$, i.e. the line contained in $[0, b(s_i)]$.…”
Section: Final Remarks (citation type: mentioning; confidence: 99%)
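The statement above invokes the maximum entropy principle: among candidate probability functions consistent with the data, prefer the one with the largest entropy $S$. A small illustrative sketch of that selection rule follows; the candidate distributions are invented for illustration and have no connection to the cited paper's equation (12).

```python
# Illustration of the maximum entropy principle: among candidate
# distributions over the same outcomes, select the one with the largest
# Shannon entropy S. With no further constraints, that is the uniform one.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

candidates = {
    "peaked":  [0.85, 0.05, 0.05, 0.05],
    "skewed":  [0.40, 0.30, 0.20, 0.10],
    "uniform": [0.25, 0.25, 0.25, 0.25],
}
for name, p in candidates.items():
    print(f"{name:8s} S = {entropy(p):.3f}")
best = max(candidates, key=lambda k: entropy(candidates[k]))
print("maximum-entropy choice:", best)  # -> uniform
```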
“…The TE (12) can also be related to the amount of information in the area of width $k^{-1}$ bounded by the curve given by $1 - nG_{\mathrm{inel}}(s, b)$, depending on each $s$ used. Of course, $1 - nG_{\mathrm{inel}}(s, b)$ is limited to the range $[0, G_{\mathrm{inel}}(s_i, b(s_i))]$ for all $i$, and therefore this area assumes a finite value, as does the amount of information one can obtain from it.…”
Section: Final Remarks (citation type: mentioning; confidence: 99%)
“…By comparison, the authors of [48] state that “if a model has many free parameters—for instance, a complex budget constraint or complex household preferences—then the model is relatively nonparsimonious”. Such a statement is not relevant here, and overfitting issues [52] do not occur in the scope of ME optimization for the profile fitting. On the contrary, both parsimony and multiple parameters helped to arrive at a competitive local extremum, which has goodness of fit equivalent to that of the other extremum candidates.…”
Section: Identification of Biomass Growth Model (citation type: mentioning; confidence: 99%)
“…Once an event is observed, and which of the $m$ classes it belongs to is identified, our uncertainty about the outcome decreases to 0. Therefore, information can be characterized as a decrease in an observer's uncertainty about the outcome (Krstanovic and Singh, 1992; Mogheir et al., 2006; Samuel et al., 2013; Foroozand and Weijs, 2017; Foroozand et al., 2018). For monitoring networks, the information each sensor provides through its observations (outcomes) is therefore linked to the uncertainty of those outcomes before measurement.…”
Section: Information Theory Terms (citation type: mentioning; confidence: 99%)
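The statement above defines information as the reduction of an observer's uncertainty once the outcome among $m$ classes is identified. A minimal sketch of that accounting for a hypothetical sensor follows; the outcome probabilities are invented for illustration.

```python
# Sketch of "information = decrease in uncertainty": before measurement,
# uncertainty about which of the m classes occurs is the Shannon entropy
# of the outcome distribution; once the outcome is identified, the
# distribution is degenerate and its entropy is 0, so the information
# gained equals the prior entropy.
import math

def shannon_entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

prior = [0.5, 0.25, 0.25]        # m = 3 classes a sensor can report
posterior = [1.0, 0.0, 0.0]      # one class identified after observation

h_before = shannon_entropy(prior)      # 1.5 bits of uncertainty
h_after = shannon_entropy(posterior)   # 0 bits: the outcome is known
print(f"information gained = {h_before - h_after:.2f} bits")
```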