2017
DOI: 10.1214/17-ejs1287
Optimal two-step prediction in regression

Abstract: High-dimensional prediction typically comprises two steps: variable selection and subsequent least-squares refitting on the selected variables. However, standard variable selection procedures, such as the lasso, hinge on tuning parameters that need to be calibrated. Cross-validation, the most popular calibration scheme, is computationally costly and lacks finite sample guarantees. In this paper, we introduce an alternative scheme that is easy to implement and both computationally and theoretically efficient.
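The two-step pipeline described in the abstract can be sketched in code. The following is only an illustrative sketch, not the paper's method: the calibration scheme the paper proposes is not reproduced here, cross-validation (LassoCV) is used merely as a stand-in tuning rule, and the data, dimensions, and variable names are synthetic placeholders.

```python
# Minimal sketch of two-step prediction:
# (1) lasso variable selection, (2) least-squares refit on the selected support.
# LassoCV is a placeholder tuning rule; the paper's calibration scheme is not shown.
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(0)
n, p, s = 100, 200, 5                      # samples, predictors, true sparsity
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 2.0                             # only the first s coefficients are active
y = X @ beta + rng.standard_normal(n)

# Step 1: variable selection with the lasso (tuning parameter chosen by CV here).
lasso = LassoCV(cv=5).fit(X, y)
support = np.flatnonzero(lasso.coef_ != 0)

# Step 2: ordinary least squares refit restricted to the selected variables.
ols = LinearRegression().fit(X[:, support], y)

# Two-step prediction on new observations.
X_new = rng.standard_normal((10, p))
y_pred = ols.predict(X_new[:, support])
print("selected variables:", support)
print("predictions:", y_pred)
```

The refitting step removes the shrinkage bias that the lasso introduces on the selected coefficients, which is why the two-step estimator is the object of interest rather than the lasso fit itself.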

Cited by 19 publications (9 citation statements)
References: 39 publications
“…Furthermore, our essay does not provide any guidance on how to select tuning parameters in practice; indeed, the tuning parameters in Lemma 2.1 depend on the noise ε, which is unknown in practice. For ideas on the practical selection of the lasso tuning parameter with finite sample guarantees, we refer to [14,15]. For ideas on how to make the selection of tuning parameters independent of unknown model aspects, we refer to [17,27] and the square-root/scaled lasso example in the following section.…”
Section: General Results
Citation type: mentioning (confidence: 99%)
“…A second vein of literature concerns data-adaptive methods for the Lasso. Examples include (i) [40], who study the asymptotic properties of optimally tuned (with respect to mean square error) Lasso estimators in connection with solutions to adaptively tuned approximate message passing algorithms; (ii) AV∞ of [21], who provide ℓ∞ estimation error guarantees; (iii) AVPr of [20], who provide guarantees for a post-lasso procedure under prediction error loss; (iv) stability selection for variable selection in [39] and subsequent work by [47]; (v) LinSelect of [7,28]; and (vi) [13], who cite the oracle bounds of [55,56,57] to establish the asymptotic guarantees in prediction loss for the cross-validated highly adaptive Lasso (HAL) estimator.…”
Section: Estimation Error Bounds
Citation type: mentioning (confidence: 99%)
“…While this work is focused exclusively on the TREX, for completeness, we mention some alternative approaches to tuning parameter calibration. Calibration schemes for the LASSO have been introduced and studied in various papers, including [15,16,17,29,39,42,43]. A LASSO-type algorithm with variable selection guarantees was introduced in [37].…”
Section: Related Literature
Citation type: mentioning (confidence: 99%)