1996
DOI: 10.1007/bf00117832

Stacked regressions

Abstract: Stacking regressions is a method for forming linear combinations of different predictors to give improved prediction accuracy. The idea is to use cross-validation data and least squares under non-negativity constraints to determine the coefficients in the combination. Its effectiveness is demonstrated in stacking regression trees of different sizes and in a simulation stacking linear subset and ridge regressions. Reasons why this method works are explored. The idea of stacking originated with Wolpert…
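The abstract's recipe is compact enough to sketch in code. Below is a minimal illustration, assuming scikit-learn-style estimators and NumPy arrays; the names (stack_weights, base_models, Z) are illustrative and not from the paper.

```python
import numpy as np
from scipy.optimize import nnls
from sklearn.model_selection import KFold

def stack_weights(base_models, X, y, n_splits=5):
    """Estimate non-negative stacking coefficients from
    cross-validated predictions, in the spirit of the abstract."""
    n, m = len(y), len(base_models)
    Z = np.zeros((n, m))  # out-of-fold predictions, one column per base model
    for train, test in KFold(n_splits=n_splits).split(X):
        for j, model in enumerate(base_models):
            model.fit(X[train], y[train])        # fit on the training folds only
            Z[test, j] = model.predict(X[test])  # predict the held-out fold
    # Least squares under non-negativity constraints on the coefficients
    weights, _ = nnls(Z, y)
    return weights
```

The stacked prediction for new data is then the weighted sum of the base models' predictions (each model refit on the full training set), i.e. `Z_new @ weights`.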

Cited by 2,382 publications (2,633 citation statements)
References 11 publications
“…Among conventional binary image classification methods, two of the most common are support vector machine (SVM) [23] and random forest (RF) [24]. The RF method was adopted for this study because of its robustness and effectiveness in the classification of varying object types and ease of execution.…”
Section: Coarse Extraction Of Windthrown Trees (mentioning)
confidence: 99%
“…We assessed SAR σ⁰ AGB based on polarizations (HH and HV), the years under investigation (2007–2010), and a combination of both, using random forest and linear regression algorithms for polarimetric and yearly AGB estimations, respectively. Breiman et al. and Breiman [69,70] proposed random forest (RF) as ensemble learning for regression and classification trees, with successive trees not dependent on earlier trees (bootstrapping). In bagging, the best predictors are randomly chosen to split the tree, making RF a robust classifier against overfitting [71].…”
Section: Estimating Aboveground Biomass From SAR (mentioning)
confidence: 99%
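As a rough illustration of the bagging mechanism this excerpt describes (each tree grown on an independent bootstrap sample, with a random subset of predictors considered at each split), here is a short sketch; it is not the cited authors' estimation pipeline, and the names (bagged_trees, rf_predict) are made up here.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def bagged_trees(X, y, n_trees=100, seed=0):
    """Grow trees on independent bootstrap samples;
    no tree depends on the trees grown before it."""
    rng = np.random.default_rng(seed)
    n = len(y)
    trees = []
    for _ in range(n_trees):
        idx = rng.integers(0, n, size=n)  # bootstrap: sample rows with replacement
        tree = DecisionTreeRegressor(max_features="sqrt")  # random predictor subset per split
        tree.fit(X[idx], y[idx])
        trees.append(tree)
    return trees

def rf_predict(trees, X):
    # Ensemble prediction: average the individual trees' outputs
    return np.mean([t.predict(X) for t in trees], axis=0)
```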
“…A classification method that produces many recursive partitioning models, each with a random selection of predictor properties, and then combines the models for predictions [62].…”
Section: Random Forest Methods (mentioning)
confidence: 99%