2011
DOI: 10.2139/ssrn.1910753

Square-Root Lasso: Pivotal Recovery of Sparse Signals via Conic Programming

Abstract: We propose a pivotal method for estimating high-dimensional sparse linear regression models, where the overall number of regressors p is large, possibly much larger than the sample size n, but only s regressors are significant. The method is a modification of the lasso, called the square-root lasso. It is pivotal in that it neither relies on knowledge of the standard deviation σ nor needs to pre-estimate σ. Moreover, the method does not rely on normality or sub-Gaussianity of the noise. It achieves…
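
The square-root lasso replaces the lasso's squared-error loss with its square root, minimizing ‖y − Xβ‖₂/√n + (λ/n)‖β‖₁; taking the square root is what makes the penalty level pivotal with respect to σ and lets the problem be posed as a conic program. A minimal sketch in Python using the cvxpy modeling library; the function name sqrt_lasso and the argument lam are illustrative choices, not from the paper:

import numpy as np
import cvxpy as cp

def sqrt_lasso(X, y, lam):
    # Square-root lasso objective: ||y - X b||_2 / sqrt(n) + (lam / n) * ||b||_1.
    # The square root on the least-squares term removes the noise level
    # sigma from the optimal choice of the penalty lam.
    n, p = X.shape
    beta = cp.Variable(p)
    obj = cp.norm2(y - X @ beta) / np.sqrt(n) + (lam / n) * cp.norm1(beta)
    cp.Problem(cp.Minimize(obj)).solve()
    return beta.value

cvxpy reduces this objective to a second-order cone program, which is the conic-programming formulation the title refers to.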

Cited by 271 publications (528 citation statements); citing publications span 2013–2022.
References 26 publications.

“…In this case, we can apply the results of the paper substituting dᵢ for zᵢ in each instance where instruments zᵢ are used, since dᵢ is conditionally exogenous and thus a valid instrument for itself. All of the simulation results are based on data generated as […]. Specifically, we apply the Square-Root Lasso of Belloni, Chernozhukov, and Wang (2011) with outcome Y and covariates (D, D·X₁, …, D·Xₚ, (1 − D), (1 − D)·X₁, …, (1 − D)·Xₚ) to select variables. We set the penalty level in the Square-Root Lasso using the "exact" option of Belloni, Chernozhukov, and Wang (2011) under the assumption of homoscedastic, Gaussian errors ζᵢ, with the tuning confidence level required in Belloni, Chernozhukov, and Wang (2011) set equal to 95%.…”
Section: Appendix P: Simulation Experiments (mentioning)
confidence: 99%
“…All of the simulation results are based on data generated as […]. Specifically, we apply the Square-Root Lasso of Belloni, Chernozhukov, and Wang (2011) with outcome Y and covariates (D, D·X₁, …, D·Xₚ, (1 − D), (1 − D)·X₁, …, (1 − D)·Xₚ) to select variables. We set the penalty level in the Square-Root Lasso using the "exact" option of Belloni, Chernozhukov, and Wang (2011) under the assumption of homoscedastic, Gaussian errors ζᵢ, with the tuning confidence level required in Belloni, Chernozhukov, and Wang (2011) set equal to 95%. After running the Square-Root Lasso, we then estimate regression coefficients by regressing Y onto only those variables that were estimated to have nonzero coefficients by the Square-Root Lasso.…”
Section: Appendix P: Simulation Experiments (mentioning)
confidence: 99%
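
The refit step described in the quote, ordinary least squares on only the variables the Square-Root Lasso selected, is often called the post-lasso estimator. A minimal sketch, assuming beta_hat comes from a square-root lasso fit such as the sqrt_lasso sketch above; the name post_lasso_ols and the tolerance tol are illustrative:

import numpy as np

def post_lasso_ols(X, y, beta_hat, tol=1e-8):
    # Refit OLS using only the coordinates the first-stage fit set nonzero.
    support = np.flatnonzero(np.abs(beta_hat) > tol)
    coef = np.zeros_like(beta_hat)
    if support.size > 0:
        coef[support], *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
    return coef, support
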
“…We will choose a penalty λ that dominates the estimation error with high probability. This principle for selecting the penalty λ is motivated by [4] and [3]. It is worth noting that this is a general principle for choosing the penalty and can be applied to many other problems.…”
Section: Choice of Penalty (mentioning)
confidence: 99%
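
The principle quoted above, picking λ large enough to dominate the estimation error with high probability, yields a penalty for the square-root lasso that involves no estimate of σ. A commonly used asymptotic version of the rule is λ = c·√n·Φ⁻¹(1 − α/(2p)); the sketch below uses c = 1.1 and α = 0.05, conventional choices stated here as assumptions rather than the only options in the cited papers:

import numpy as np
from scipy.stats import norm

def sqrt_lasso_penalty(n, p, alpha=0.05, c=1.1):
    # Pivotal penalty level: depends only on n, p, and the confidence
    # level alpha, never on the unknown noise level sigma.
    return c * np.sqrt(n) * norm.ppf(1.0 - alpha / (2.0 * p))
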
“…In practice, the Gaussian assumption may not hold, and estimating the standard deviation σ is not a trivial problem. In a recent paper, [3] proposed the square-root lasso method, in which knowledge of the distribution or the variance is not required. Instead, some moment assumptions on the errors and the design matrix are needed.…”
Section: Introduction (mentioning)
confidence: 99%
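
A toy illustration of that point, reusing the sqrt_lasso and sqrt_lasso_penalty sketches above: the penalty is computed without touching σ or the error distribution, so the same code runs unchanged under heavy-tailed noise (all names below come from the earlier sketches or are illustrative):

import numpy as np

rng = np.random.default_rng(0)
n, p, s = 100, 200, 5
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:s] = 1.0
y = X @ beta_true + rng.standard_t(df=4, size=n)  # non-Gaussian, heavy-tailed errors

lam = sqrt_lasso_penalty(n, p)   # no estimate of sigma anywhere
beta_hat = sqrt_lasso(X, y, lam)
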
“…In a recent effort, the sparse iterative covariance-based estimator (SPICE) [19] utilizes a covariance-fitting criterion, originally developed within array processing, to form sparse estimates without the need to select hyperparameters. In fact, SPICE may be shown to be equivalent to the square-root (SR) LASSO [20]; in a covariance-fitting sense, SPICE may as a result be viewed as the optimal selection of the SR LASSO hyperparameter [21]. In this paper, we extend the method proposed in [22], which generalizes SPICE to grouped variables, along the lines of [23], to form recursive estimates in an online fashion, reminiscent of the approach used in [24].…”
Section: Introduction (mentioning)
confidence: 99%