1997
DOI: 10.1007/bfb0020178
A boosting algorithm for regression

Cited by 26 publications (21 citation statements). References 3 publications.
“…weak learner) on a weighted training sample. The base learner is expected to return at each iteration t a hypothesis h_t from the hypothesis set H that has small weighted training error (see (4)) or large edge (see (18)). These hypotheses are then linearly combined to form the final or composite hypothesis f_T as in (1).…”
Section: Leveraging as Stagewise Greedy Optimization
Citation type: mentioning
confidence: 99%
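Since the excerpt describes the generic leveraging loop only in prose, here is a minimal Python sketch of that scheme, assuming AdaBoost-style exponential reweighting and binary labels in {-1, +1}. It illustrates the loop the citation describes, not the cited paper's regression algorithm, and stump_learner is a hypothetical base learner added only to keep the example self-contained.

import numpy as np

def stump_learner(X, y, d):
    # Hypothetical base learner (not from the cited paper): an exhaustive
    # decision stump minimizing weighted error over features, thresholds,
    # and signs, for labels y in {-1, +1}.
    best_err, best_rule = np.inf, None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for s in (1, -1):
                pred = s * np.where(X[:, j] <= thr, 1, -1)
                err = np.sum(d * (pred != y))
                if err < best_err:
                    best_err, best_rule = err, (j, thr, s)
    j, thr, s = best_rule
    return lambda Z: s * np.where(Z[:, j] <= thr, 1, -1)

def leverage(X, y, base_learner=stump_learner, T=50):
    # Stagewise loop from the excerpt: each round fits the base learner to
    # the current weighted sample, keeps the returned hypothesis h_t with a
    # coefficient a_t, and finally returns f_T(x) = sum_t a_t * h_t(x).
    n = len(y)
    d = np.full(n, 1.0 / n)              # uniform initial sample weights
    hypotheses, alphas = [], []
    for _ in range(T):
        h = base_learner(X, y, d)
        pred = h(X)
        err = np.sum(d * (pred != y))    # weighted training error of h_t
        if err <= 0 or err >= 0.5:       # perfect fit or no edge: stop
            break
        a = 0.5 * np.log((1 - err) / err)    # AdaBoost-style coefficient
        d = d * np.exp(-a * y * pred)        # upweight misclassified points
        d /= d.sum()
        hypotheses.append(h)
        alphas.append(a)
    return lambda Z: sum(a * h(Z) for a, h in zip(alphas, hypotheses))

Different leveraging algorithms differ mainly in the coefficient rule and the reweighting step; the exponential reweighting above is just one familiar choice.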
“…[17] A Matlab implementation can be downloaded at http://mlg.anu.edu.au/~raetsch/software. [18] Here we force the w's to be non-negative, which can be done without loss of generality if the hypothesis set is closed under negation.…”
Section: Regularization Terms and Sparseness
Citation type: mentioning
confidence: 99%
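Footnote [18]'s without-loss-of-generality claim admits a one-line justification; the identity below is our gloss, not text from the citing paper:

\[
f \;=\; \sum_{t=1}^{T} w_t\, h_t
  \;=\; \sum_{t=1}^{T} |w_t| \,\bigl(\operatorname{sign}(w_t)\, h_t\bigr),
\qquad |w_t| \ge 0,
\]

where closure under negation guarantees that $-h \in H$ whenever $h \in H$, so each $\operatorname{sign}(w_t)\,h_t$ is again a member of $H$ and the same combination is expressed with only non-negative coefficients.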
“…Although experimental work shows that algorithms related to AdaBoost.R (Drucker, 1997; Ridgeway, Madigan, & Richardson, 1999; Bertoni, Campadelli, & Parodi, 1997) can be effective, the approach suffers from two drawbacks.…”
Section: Regression
Citation type: mentioning
confidence: 99%