2018
DOI: 10.1080/01621459.2018.1429276

Excess Optimism: How Biased is the Apparent Error of an Estimator Tuned by SURE?

Abstract: Nearly all estimators in statistical prediction come with an associated tuning parameter, in one way or another. Common practice, given data, is to choose the tuning parameter value that minimizes a constructed estimate of the prediction error of the estimator; we focus on Stein's unbiased risk estimator, or SURE (Stein, 1981; Efron, 1986), which forms an unbiased estimate of the prediction error by augmenting the observed training error with an estimate of the degrees of freedom of the estimator. Parameter tun…
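For intuition, here is a minimal sketch (not code from the paper) of the tuning procedure the abstract describes, applied to a family of ridge-type linear smoothers with known noise variance. The function and variable names (sure_tuned_fit, X, Y, sigma2, grid) are illustrative assumptions, not anything defined in the paper.

```python
import numpy as np

def sure_tuned_fit(X, Y, sigma2, grid):
    """Pick the ridge penalty s on `grid` that minimizes SURE; return (s, SURE, fit)."""
    n = len(Y)
    best_s, best_sure, best_fit = None, np.inf, None
    for s in grid:
        # Linear smoother for this tuning value: H_s = X (X'X + s I)^{-1} X'
        H = X @ np.linalg.solve(X.T @ X + s * np.eye(X.shape[1]), X.T)
        fit = H @ Y
        df = np.trace(H)                                  # degrees of freedom of a linear smoother
        train_err = np.sum((Y - fit) ** 2)                # observed (apparent) training error
        sure = train_err - n * sigma2 + 2 * sigma2 * df   # Stein's unbiased risk estimate
        if sure < best_sure:
            best_s, best_sure, best_fit = s, sure, fit
    return best_s, best_sure, best_fit

# Toy usage with synthetic data (sigma^2 = 1 by construction).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
Y = X @ rng.normal(size=10) + rng.normal(size=100)
s_hat, sure_hat, fit = sure_tuned_fit(X, Y, sigma2=1.0, grid=np.logspace(-2, 3, 30))
# The paper's question: how optimistically biased is `sure_hat` (the apparent error,
# evaluated at the SURE-minimizing s_hat) as an estimate of the tuned fit's true risk?
```

The grid search over the tuning parameter is only a convenience for the sketch; the point is that the reported minimum of SURE is itself a data-dependent quantity, which is exactly where the excess optimism studied in the paper arises.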

Cited by 16 publications (17 citation statements). References 52 publications.
“…Though kernels are theoretically well suited to high dimensions (Diaconis, Goel, and Holmes 2008; El Karoui 2010), large numbers of variables still create practical problems, both computational and interpretative. Statistically, recent theoretical advances in selective inference (Taylor and Tibshirani 2015), risk estimation for tuning parameters (Tibshirani and Rosset 2016), and subsampling (Boutsidis, Mahoney, and Drineas 2009; Gu, Jeon, and Lin 2013; Homrighausen and McDonald 2016) offer paths forward in high-dimensional space. In the meantime, our analysis suggests that KRLS can produce interpretable and theoretically useful estimates on a perennial topic of interest: the behavior of American voters.…”
Section: Results (mentioning)
confidence: 99%
“…We prove the theorem in Section 2.3, providing a bit of commentary here by discussing subset regression estimators, as in the paper [4, Sec. 4], then discussing more general smoothers and estimates [2].…”
Section: A Bound On the Excess Degrees Of Freedom (mentioning)
confidence: 99%
“…Note that $Z = Y - \theta_0$ and $\mathbb{E}[Z] = 0$, and define the estimated $\hat{Y}_i$ by $\hat{Y}_i = [H_{s(Y)}(Y)]_i$. Then following [4], we see that if we define the excess degrees of freedom…”
Section: Introduction and Setting (mentioning)
confidence: 96%
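For context, here is a hedged sketch of the quantity that quote is building toward, written in the covariance form of degrees of freedom (Efron, 1986) and following the conventions of the excess-optimism paper [4]; the citing note's exact display may differ.

```latex
% A sketch, not the citing note's exact display: excess degrees of freedom of the
% SURE-tuned fit \hat{Y} = H_{s(Y)} Y, with s(Y) the SURE-minimizing tuning value.
\[
  \mathrm{edf}
  \;=\;
  \underbrace{\frac{1}{\sigma^2}\sum_{i=1}^{n}\operatorname{Cov}\!\bigl(\hat{Y}_i,\,Y_i\bigr)}_{\text{true df of the tuned fit}}
  \;-\;
  \underbrace{\mathbb{E}\bigl[\operatorname{tr}\bigl(H_{s(Y)}\bigr)\bigr]}_{\text{expected plug-in df at the selected }s(Y)}
\]
```

Under this convention, excess optimism, the bias of the apparent error studied in the abstract above, is $2\sigma^2$ times this excess degrees of freedom.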