2018
DOI: 10.1093/restud/rdy053

Two-Step Estimation and Inference with Possibly Many Included Covariates

Abstract: We study the implications of including many covariates in a first-step estimate entering a two-step estimation procedure. We find that a first-order bias emerges when the number of included covariates is "large" relative to the square root of the sample size, rendering standard inference procedures invalid. We show that the jackknife is able to estimate this "many covariates" bias consistently, thereby delivering a new automatic bias-corrected two-step point estimator. The jackknife also consistently estimates the …
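The jackknife construction in the abstract is easy to illustrate. Below is a minimal sketch, assuming a simple 2SLS-style plug-in as the two-step estimator: the first step regresses d on many covariates Z, the second step uses the fitted values as a generated regressor. The function names and data layout are illustrative assumptions, not the paper's exact construction.

```python
# Minimal sketch of a jackknife bias-corrected two-step estimator.
# Assumed setup (not the paper's exact one): first step is OLS of d on
# many covariates Z; second step regresses y on the fitted values d_hat.
import numpy as np

def two_step(y, d, Z):
    """First step: fit d on Z. Second step: use the fitted values
    d_hat as a generated regressor for y."""
    d_hat = Z @ np.linalg.lstsq(Z, d, rcond=None)[0]
    return float(d_hat @ y / (d_hat @ d))

def jackknife_bias_corrected(y, d, Z):
    """Re-run the entire two-step procedure leaving out one observation
    at a time, then apply the standard jackknife bias correction
    n * theta_hat - (n - 1) * mean(leave-one-out estimates)."""
    n = len(y)
    theta = two_step(y, d, Z)
    loo = [two_step(np.delete(y, i), np.delete(d, i), np.delete(Z, i, axis=0))
           for i in range(n)]
    return n * theta - (n - 1) * np.mean(loo)
```

The key design point, per the abstract, is that the leave-one-out replications repeat both steps, so the correction automatically captures the bias contributed by the many-covariate first step.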

Citation Types: 0 supporting, 42 mentioning, 0 contrasting

Year Published: 2018–2023

Cited by 53 publications (42 citation statements)
References: 54 publications
“…Further, cross‐fitting paired with local robustness may yield weaker smoothness conditions by providing “underfitting” robustness, that is, weakening bias‐related assumptions (Chernozhukov et al (2018)), but the cost may be too high here. Weaker variance‐related assumptions, or “overfitting” robustness (Cattaneo, Jansson, and Ma (2019)), may also be possible following deep learning, but are less automatic at present. Other methods for causal inference under relaxed assumptions may be useful here, such as extensions to doubly robust inference (Tan (2020)) or robust inverse weighting (Ma and Wang (2018)).…”
Section: Inference After Deep Learning (mentioning; confidence: 99%)
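Because the statement above turns on cross-fitting, a minimal sketch may help fix ideas. This assumes a partially linear model y = θd + g(X) + ε and uses a lasso first stage purely as a placeholder; the learner, fold count, and helper name are illustrative, not prescribed by Chernozhukov et al. (2018).

```python
# Minimal K-fold cross-fitting sketch for y = theta*d + g(X) + e.
# Nuisances E[y|X] and E[d|X] are fit on training folds and evaluated
# out-of-fold; theta comes from a residual-on-residual regression.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import KFold

def cross_fit_theta(y, d, X, n_splits=5, seed=0):
    y_res = np.empty_like(y, dtype=float)
    d_res = np.empty_like(d, dtype=float)
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        # Fit each nuisance on the training folds, residualize the held-out fold.
        y_res[test] = y[test] - LassoCV(cv=3).fit(X[train], y[train]).predict(X[test])
        d_res[test] = d[test] - LassoCV(cv=3).fit(X[train], d[train]).predict(X[test])
    return float(d_res @ y_res / (d_res @ d_res))
```

The out-of-fold evaluation is the essential ingredient: the nuisance estimation error never interacts with its own data in the final moment condition, which is what permits the weaker first-stage conditions cited in the statement.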
“…Chernozhukov et al (2018) and Newey and Robins (2018) utilized related cross-fitting techniques to reduce bias in high-dimensional estimation problems that feature machine-learning estimators. Cattaneo, Jansson, and Ma (2019) characterized the bias in (nonlinear) two-step estimators when the first step features a high-dimensional linear regression.…”
Section: Results (mentioning; confidence: 99%)
“…Cattaneo et al. (2019) investigate the consequences of including a (moderately) large number of regressors in the first-step regression on parameter estimates in the second step, in a general GMM framework with generated regressors. The critical rate of numerosity of regressors turns out to be m = O(√n), in which case the numerosity of first-step regressors induces an inconsistency bias in the distribution of the second-step estimates, in addition to higher-order effects on the asymptotic variance.…”
Section: Many Regressors (mentioning; confidence: 99%)
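The critical rate in the statement above is easy to see in a toy Monte Carlo. The design below is an illustrative assumption (an endogenous regressor with only one relevant first-step covariate, estimated by the same 2SLS-style plug-in sketched after the abstract), not the paper's simulation: when m grows like √n, the uncorrected estimator's bias stays of the same order as its standard deviation instead of vanishing.

```python
# Toy Monte Carlo for the m = O(sqrt(n)) regime. Assumed DGP: d is
# endogenous through the shared error u, and only the first of the m
# first-step covariates is relevant.
import numpy as np

rng = np.random.default_rng(1)

def mc_bias(n, m, reps=500, theta=1.0):
    draws = []
    for _ in range(reps):
        Z = rng.normal(size=(n, m))
        u = rng.normal(size=n)
        d = Z[:, 0] + u                          # endogenous regressor
        y = theta * d + u + rng.normal(size=n)   # endogeneity via shared u
        d_hat = Z @ np.linalg.lstsq(Z, d, rcond=None)[0]  # first step: m covariates
        draws.append(d_hat @ y / (d_hat @ d))             # second step
    draws = np.array(draws)
    return draws.mean() - theta, draws.std()

for n in (100, 400, 1600):
    m = int(2 * np.sqrt(n))                      # m grows like sqrt(n)
    bias, sd = mc_bias(n, m)
    print(f"n={n:5d}  m={m:3d}  bias={bias:+.3f}  sd={sd:.3f}  bias/sd={bias/sd:+.2f}")
```

If m were held fixed, bias/sd would shrink toward zero as n grows; with m ∝ √n it stabilizes, which is precisely the first-order "many covariates" bias the paper's jackknife correction targets.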