2011
DOI: 10.2139/ssrn.1910169

Sparse Models and Methods for Optimal Instruments with an Application to Eminent Domain

Abstract: We develop results for the use of Lasso and Post-Lasso methods to form first-stage predictions and estimate optimal instruments in linear instrumental variables (IV) models with many instruments, p. Our results apply even when p is much larger than the sample size, n. We show that the IV estimator based on using Lasso or Post-Lasso in the first stage is root-n consistent and asymptotically normal when the first stage is approximately sparse; i.e., when the conditional expectation of the endogenous variable…
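
A minimal Python sketch of the idea described in the abstract, assuming scikit-learn; the function name post_lasso_iv, the cross-validated penalty, and the absence of exogenous controls are simplifying assumptions for illustration, not the paper's implementation (which uses a data-driven plug-in penalty and accommodates controls and heteroscedasticity):

import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

def post_lasso_iv(y, d, Z):
    """y: outcome (n,), d: endogenous regressor (n,), Z: candidate instruments (n, p)."""
    # First stage: Lasso selects the instruments that predict d.
    # (Cross-validation is a stand-in for the paper's plug-in penalty choice.)
    lasso = LassoCV(cv=5).fit(Z, d)
    selected = np.flatnonzero(lasso.coef_)
    if selected.size == 0:
        raise ValueError("Lasso selected no instruments")
    # Post-Lasso: refit OLS on the selected instruments to undo shrinkage bias;
    # the fitted values serve as the estimated optimal instrument.
    d_hat = LinearRegression().fit(Z[:, selected], d).predict(Z[:, selected])
    # Second stage: simple IV estimate of the coefficient on d in a model with
    # an intercept, using the demeaned estimated instrument.
    dh, yc, dc = d_hat - d_hat.mean(), y - y.mean(), d - d.mean()
    return (dh @ yc) / (dh @ dc), selected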

Cited by 101 publications (232 citation statements); citing publications span 2014–2023.
References 47 publications.
“…These techniques are applicable to the first stage of instrumental variable analysis as well. In particular, a set of papers has already introduced regularization into the first stage in a high-dimensional setting, including the LASSO (Belloni, Chen, Chernozhukov, and Hansen 2012) and ridge regression (Carrasco 2012; Hansen and Kozbur 2014). More recent extensions include nonlinear functional forms, all the way to neural nets (Hartford, Leyton-Brown, and Taddy 2016).…”
Section: Prediction in the Service of Estimation (mentioning)
confidence: 99%
“…In Subsection 4.1, we analyze non-parametrically the relationship between bank foreign funding and credit supply. And, in Subsection 4.2, we use the Least Absolute Shrinkage and Selection Operator (LASSO) method in Belloni et al. (2012) to select the optimal parametrization of the instrument and to avoid choosing the functional form of the instrument in an ad hoc manner. We test the robustness of the results to alternative definitions of the instrument in Subsection 4.6.…”
Section: Instrument (mentioning)
confidence: 99%
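
As an illustration of what selecting the parametrization of an instrument with Lasso can look like, here is a hypothetical Python sketch; the transformation dictionary, the function names, and the cross-validated penalty are illustrative assumptions, not the cited paper's specification:

import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

def instrument_dictionary(z):
    """Hypothetical dictionary of transformations of a single raw instrument z (n,)."""
    return np.column_stack([
        z,                                   # linear term
        z**2, z**3,                          # polynomial terms
        np.log1p(np.abs(z)),                 # damped nonlinear term
        (z > np.median(z)).astype(float),    # threshold indicator
    ])

def select_parametrization(d, z):
    """Let Lasso decide which transformations of z enter the first stage for d."""
    Z = StandardScaler().fit_transform(instrument_dictionary(z))
    lasso = LassoCV(cv=5).fit(Z, d)
    return np.flatnonzero(lasso.coef_)       # indices of the retained terms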
“…A useful structure which has been employed in the recent econometrics literature focusing on inference in high-dimensional settings is approximate sparsity; see, for example, Belloni, Chen, Chernozhukov, and Hansen (2012), among others. A leading example is the approximately sparse linear regression model, which is characterized by having many covariates of which only a small number are important for predicting the outcome.…”
Section: Introduction (mentioning)
confidence: 99%
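
For concreteness, the approximately sparse first-stage condition referenced above can be written as follows; the notation is illustrative of the generic form used in this literature rather than quoted from the paper:

% The conditional expectation of the endogenous regressor d_i given the
% instruments z_i is well approximated by a small set of s << n terms,
% with an approximation error r_i that vanishes at the estimation rate.
\[
  \mathrm{E}[d_i \mid z_i] = z_i'\beta_0 + r_i, \qquad
  \|\beta_0\|_0 \le s \ll n, \qquad
  \Big(\tfrac{1}{n}\sum_{i=1}^n r_i^2\Big)^{1/2} = O\!\big(\sqrt{s/n}\big).
\]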
“…data even when perfect variable selection is not feasible; see, e.g., Candès and Tao (2007), Meinshausen and Yu (2009), Bickel, Ritov, and Tsybakov (2009), Huang, Horowitz, and Wei (2010), and the references therein. Such methods have also been shown to extend to nonparametric and non-Gaussian cases as in Bickel, Ritov, and Tsybakov (2009) and Belloni, Chen, Chernozhukov, and Hansen (2012), the latter of which also allows for conditional heteroscedasticity.…”
Section: Introduction (mentioning)
confidence: 99%