2013
DOI: 10.1016/j.jhydrol.2012.11.015

Advancing monthly streamflow prediction accuracy of CART models using ensemble learning paradigms

Cited by 171 publications (89 citation statements) | References 32 publications
“…First, CART does not need a priori information about the data. This makes it possible to consider a wider variety of model specifications when a combination of continuous and categorical data is available for use (Erdal and Karakurt 2012). Even if the training set holds some irrelevant information (e.g.…”
Section: Results (mentioning, confidence: 99%)
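As a concrete illustration of that point, below is a minimal sketch (hypothetical column names and toy values, built with scikit-learn rather than any implementation from the cited study) of a CART regressor fitted on a mix of continuous and categorical predictors, with no prior assumption about the data distribution.

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeRegressor

# Hypothetical monthly records: continuous predictors (precipitation,
# temperature), a categorical predictor (season), and streamflow as the target.
df = pd.DataFrame({
    "precip_mm": [120.0, 85.0, 30.0, 10.0, 95.0, 140.0],
    "temp_c":    [5.0, 8.0, 18.0, 25.0, 12.0, 3.0],
    "season":    ["winter", "spring", "summer", "summer", "autumn", "winter"],
    "flow_m3s":  [42.0, 35.0, 12.0, 6.0, 28.0, 47.0],
})
X, y = df.drop(columns="flow_m3s"), df["flow_m3s"]

model = Pipeline([
    # One-hot encode the categorical predictor; pass continuous ones through.
    ("prep", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), ["season"])],
        remainder="passthrough",
    )),
    # A single regression tree (CART); no distributional assumptions required.
    ("cart", DecisionTreeRegressor(max_depth=3, random_state=0)),
])
model.fit(X, y)
print(model.predict(X.head(2)))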
“…When no hypotheses about the data structure are available, non-parametric analysis becomes more effective, since the need for any additional assumptions concerning the distribution of model errors disappears (Erdal and Karakurt 2012).…”
Section: Results (mentioning, confidence: 99%)
“…They are usually depicted as systems of interconnected “neurons” that compute values from inputs by feeding information through the network. As the algorithm has developed, this method has matured and has been adopted into software modules [6][7][8][9][10][11][12].…”
Section: Artificial Neural Network (mentioning, confidence: 99%)
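To make that description concrete, here is a minimal sketch (toy inputs and randomly initialised weights, not tied to any cited model) of feeding information forward through layers of interconnected neurons, each of which computes a weighted sum of its inputs followed by an activation.

import numpy as np

rng = np.random.default_rng(0)

def forward(x, weights, biases):
    """Propagate an input vector through successive fully connected layers."""
    a = x
    for W, b in zip(weights, biases):
        a = np.tanh(W @ a + b)  # weighted sum plus nonlinearity at each layer
    return a

# Hypothetical network: 3 inputs, one hidden layer of 4 neurons, 1 output.
weights = [rng.normal(size=(4, 3)), rng.normal(size=(1, 4))]
biases  = [rng.normal(size=4), rng.normal(size=1)]

x = np.array([0.5, -1.2, 0.3])  # e.g. lagged flow, rainfall, temperature
print(forward(x, weights, biases))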
“…[12], where the authors demonstrated the advantages of an improved version of boosting, namely AdaBoost.RT, which was compared with other learning methods on several benchmark problems and on two river flow forecasting problems. In a recent study [13], the authors investigated the potential of bagging and boosting for building classification and regression tree ensembles to refine the accuracy of streamflow predictions. They report that the bagged model performs slightly better than the boosted model in the testing phase.…”
Section: Introduction (mentioning, confidence: 99%)
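As a rough illustration of the bagging-versus-boosting comparison described above, the sketch below fits bagged and boosted regression-tree ensembles with scikit-learn on synthetic data. The data are fabricated stand-ins for streamflow records, and scikit-learn's AdaBoostRegressor implements AdaBoost.R2 rather than the AdaBoost.RT variant discussed in [12], so it serves only as an illustrative substitute.

import numpy as np
from sklearn.ensemble import AdaBoostRegressor, BaggingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 4))                    # hypothetical predictors
y = 10 * X[:, 0] + np.sin(6 * X[:, 1]) + rng.normal(scale=0.3, size=500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Same base learner for both ensembles (scikit-learn >= 1.2 uses `estimator`;
# older releases call this argument `base_estimator`).
base = DecisionTreeRegressor(max_depth=4, random_state=0)
models = {
    "bagged CART":  BaggingRegressor(estimator=base, n_estimators=100, random_state=0),
    "boosted CART": AdaBoostRegressor(estimator=base, n_estimators=100, random_state=0),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    mse = mean_squared_error(y_te, model.predict(X_te))
    print(f"{name}: test MSE = {mse:.3f}")

Which ensemble comes out ahead depends on the data; the cited study [13] reports the bagged model performing slightly better than the boosted model in the testing phase.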