1979
DOI: 10.1080/00401706.1979.10489815

Ridge Regression and James-Stein Estimation: Review and Comments

Cited by 164 publications (42 citation statements)
References 52 publications
“…A fuller review is given in Draper and Van Nostrand (1979), Hocking (1976) and Judge et al (1985).…”
Section: Bias and Variance of Estimates (mentioning)
confidence: 99%
“…The L1 and L2 penalty forms are similar to the penalties used for shrinkage in lasso and ridge regression respectively [22,23]. …”
Section: Results (mentioning)
confidence: 99%
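
The penalty forms mentioned in this excerpt correspond to the familiar lasso and ridge objectives. A minimal Python/NumPy sketch of the two penalized least-squares criteria, assuming a design matrix X, response y, coefficient vector beta, and tuning constant lam (all names are illustrative, not taken from the cited papers):

import numpy as np

def lasso_objective(beta, X, y, lam):
    # Residual sum of squares plus an L1 penalty on the coefficients (lasso shrinkage).
    return np.sum((y - X @ beta) ** 2) + lam * np.sum(np.abs(beta))

def ridge_objective(beta, X, y, lam):
    # Residual sum of squares plus an L2 penalty on the coefficients (ridge shrinkage).
    return np.sum((y - X @ beta) ** 2) + lam * np.sum(beta ** 2)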
“…A simple way to obtain this is to add a multiple of the p2 × p2 identity matrix I to Σ22 before inverting it. This technique is called ridge regression, and is the subject matter of [8], [21], [22], and many other papers. Because of (10)… In our application x is surely not Gaussian.…”
Section: Prediction (mentioning)
confidence: 99%
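
The device described in this excerpt, adding a multiple of the identity matrix to a cross-product or covariance matrix before inverting it, is the standard ridge estimator. A minimal NumPy sketch under the ordinary linear-model setup (X, y, and lam are assumed names for the design matrix, response, and ridge constant; this is an illustration, not the cited papers' code):

import numpy as np

def ridge_coefficients(X, y, lam):
    # Ridge estimate: solve (X'X + lam * I) beta = X'y, so the matrix being
    # inverted is regularized even when X'X is ill-conditioned or singular.
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

Setting lam = 0 recovers ordinary least squares; increasing lam shrinks the coefficients toward zero, trading bias for variance.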
“…These two areas converge in part in their concluding that the inversion and prediction problems can both be improved by, in effect, reducing the ratio between large eigenvalues and small ones, all necessarily real, of the matrix whose inverse figures in best linear unbiased estimation. Variations of how to do that are sometimes termed "ridge regression" [8]. We show that a very simple version of ridge regression with validated choice of ridge parameter not only improves prediction in P-PTSVQ but also diminishes coding error.…”
Section: Introduction (mentioning)
confidence: 99%
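
The "validated choice of ridge parameter" referred to in this excerpt can be illustrated by a simple K-fold cross-validation over a grid of candidate values. A minimal sketch, again in NumPy, with all function and variable names assumed for illustration rather than taken from the cited paper:

import numpy as np

def ridge_fit(X, y, lam):
    # Ridge estimate: (X'X + lam * I)^(-1) X'y.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def choose_ridge_parameter(X, y, lambdas, n_folds=5, seed=0):
    # Return the candidate ridge constant with the smallest K-fold
    # cross-validated squared prediction error.
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), n_folds)
    errors = []
    for lam in lambdas:
        err = 0.0
        for hold in folds:
            train = np.setdiff1d(np.arange(len(y)), hold)
            beta = ridge_fit(X[train], y[train], lam)
            err += np.sum((y[hold] - X[hold] @ beta) ** 2)
        errors.append(err)
    return lambdas[int(np.argmin(errors))]

Larger ridge constants compress the spread between the largest and smallest eigenvalues of X'X + lam * I, which is the eigenvalue-ratio reduction the excerpt describes.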