2018
DOI: 10.1080/01621459.2017.1285776
On the Effect of Bias Estimation on Coverage Accuracy in Nonparametric Inference

Abstract: Nonparametric methods play a central role in modern empirical work. While they provide inference procedures that are more robust to parametric misspecification bias, they may be quite sensitive to tuning parameter choices. We study the effects of bias correction on confidence interval coverage in the context of kernel density and local polynomial regression estimation, and prove that bias correction can be preferred to undersmoothing for minimizing coverage error and increasing robustness to tuning parameter choice.
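To make the abstract's object concrete, here is a minimal sketch of a robust bias-corrected confidence interval for a kernel density at a point, with the bias estimated at the same bandwidth as the density (b = h) so that the standard error also reflects the variability of the estimated bias. This is an illustrative reconstruction, not the authors' implementation; the Gaussian kernel, the bandwidth, and the simulated design are all assumptions.

```python
import numpy as np

SQRT2PI = np.sqrt(2.0 * np.pi)

def phi(u):
    """Standard normal density, used as the kernel K."""
    return np.exp(-0.5 * u * u) / SQRT2PI

def rbc_ci(x, x0, h, z=1.96):
    """Robust bias-corrected CI for a density at x0, Gaussian kernel, b = h.

    With b = h the bias-corrected estimator is a kernel average with
    equivalent kernel M(u) = K(u) - (mu2/2) K''(u); for the Gaussian kernel
    mu2 = 1 and K''(u) = (u^2 - 1) K(u), so M(u) = (1.5 - 0.5 u^2) K(u).
    The standard error is the sample standard error of the summands, so it
    accounts for the variability of the estimated bias term.
    """
    u = (x - x0) / h
    z_i = (1.5 - 0.5 * u * u) * phi(u) / h  # influence contributions M_h(x_i - x0)
    center = z_i.mean()
    se = z_i.std(ddof=1) / np.sqrt(len(x))
    return center - z * se, center + z * se

# Monte Carlo check of coverage at x0 = 0 for standard normal data,
# where the true density is phi(0) ~= 0.3989.
rng = np.random.default_rng(2)
hits = 0
for _ in range(500):
    x = rng.normal(size=500)
    lo, hi = rbc_ci(x, x0=0.0, h=0.5)
    hits += lo <= phi(0.0) <= hi
print(f"empirical coverage: {hits / 500:.2f}")  # should be near 0.95
```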

Cited by 302 publications (294 citation statements). References 40 publications.
“…We fix the main and bias bandwidths to be the same following Calonico et al (2018), who argue that that is an optimal choice. All local regressions use a triangular kernel.…”
Section: Empirical Strategy (mentioning)
confidence: 99%
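To illustrate the bandwidth choice this excerpt describes, here is a minimal sketch of bias-corrected local linear regression at a point, in which the bias is estimated from a local quadratic fit using the same bandwidth as the main estimate (b = h) and a triangular kernel. The function names and the simulated data are illustrative assumptions, not code from the cited papers.

```python
import numpy as np

def triangular_kernel(u):
    """Triangular kernel K(u) = 1 - |u| on [-1, 1], zero elsewhere."""
    return np.maximum(1.0 - np.abs(u), 0.0)

def local_poly_fit(x, y, x0, h, degree):
    """Kernel-weighted least-squares polynomial fit of m(.) around x0."""
    w = triangular_kernel((x - x0) / h)
    keep = w > 0
    X = np.vander(x[keep] - x0, degree + 1, increasing=True)  # 1, (x-x0), (x-x0)^2, ...
    Xw = X * w[keep][:, None]
    return np.linalg.solve(Xw.T @ X, Xw.T @ y[keep])  # beta[k] estimates m^(k)(x0)/k!

def bias_corrected_estimate(x, y, x0, h):
    """Local linear estimate of m(x0) minus an estimated leading bias term,
    with the bias bandwidth set equal to the main bandwidth (b = h)."""
    beta1 = local_poly_fit(x, y, x0, h, degree=1)  # main local linear fit
    beta2 = local_poly_fit(x, y, x0, h, degree=2)  # quadratic fit, same bandwidth
    m2_hat = 2.0 * beta2[2]                        # estimated second derivative
    # Leading interior-point bias of local linear regression:
    # 0.5 * m''(x0) * h^2 * mu2(K), with mu2(K) = \int u^2 K(u) du = 1/6
    # for the triangular kernel.
    return beta1[0] - 0.5 * m2_hat * h**2 / 6.0

# Usage on simulated data: m(x) = sin(2x), estimated at x0 = 0.5.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 2000)
y = np.sin(2.0 * x) + 0.2 * rng.normal(size=2000)
print(bias_corrected_estimate(x, y, x0=0.5, h=0.3))  # approx sin(1) ~= 0.841
```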
“…The choice ε̂_n in (8) is derived from a particular trade-off between the worst-case size distortion if the models were overlapping and the worst-case power loss if the models were not overlapping. In principle, it would be possible to derive data-driven choices of ε̂_n using other criteria, such as weighted size distortion and power loss or error in rejection probability (e.g., as in Calonico, Cattaneo, and Farrell 2016). One attractive feature of the trade-off presented here is the simplicity of the resulting choice in (8).…”
Section: Data-driven Regularization Parameter (mentioning)
confidence: 99%
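The trade-off this excerpt describes can be sketched numerically: choose ε̂_n so as to balance a worst-case size distortion, which grows as ε shrinks, against a worst-case power loss, which grows as ε increases. The two criterion functions below are hypothetical placeholders (the cited paper derives the actual worst-case expressions analytically); only the balancing step is the point.

```python
import numpy as np

# Hypothetical placeholder criteria; the cited paper derives the real ones.
def worst_case_size_distortion(eps):
    return np.exp(-5.0 * eps)        # assumed decreasing in eps

def worst_case_power_loss(eps):
    return 1.0 - np.exp(-2.0 * eps)  # assumed increasing in eps

# Pick eps minimizing the larger of the two worst-case losses, a minimax-style
# balance of the size/power trade-off described in the excerpt.
grid = np.linspace(1e-3, 2.0, 2000)
objective = np.maximum(worst_case_size_distortion(grid),
                       worst_case_power_loss(grid))
eps_hat = grid[np.argmin(objective)]
print(f"balanced epsilon: {eps_hat:.3f}")
```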
“…Similarly to the results for the sharp RD, the asymptotic covariance formula converges to the small-h asymptotic covariance both as h → 0 and, for fixed h, when f_0(x) and σ_εη(x) are constant within the bandwidth around the cutoff.…”
Section: Fuzzy Regression Discontinuity (mentioning)
confidence: 99%
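For context on the estimator whose asymptotic covariance this excerpt discusses, here is a minimal self-contained sketch of the fuzzy RD point estimate: the jump in the outcome at the cutoff divided by the jump in treatment take-up, each estimated by one-sided local linear regressions with a triangular kernel. The simulated design and all names are illustrative assumptions, not the cited paper's setup.

```python
import numpy as np

def llr_at_cutoff(x, v, h, side):
    """Local linear intercept at the cutoff (0) from one side, triangular kernel."""
    mask = (x >= 0.0) if side == "right" else (x < 0.0)
    xs, vs = x[mask], v[mask]
    w = np.maximum(1.0 - np.abs(xs / h), 0.0)  # triangular kernel weights
    keep = w > 0
    X = np.column_stack([np.ones(keep.sum()), xs[keep]])
    Xw = X * w[keep][:, None]
    beta = np.linalg.solve(Xw.T @ X, Xw.T @ vs[keep])
    return beta[0]  # intercept = boundary estimate at the cutoff

def fuzzy_rd(x, y, d, h):
    """Outcome jump divided by treatment take-up jump at the cutoff."""
    tau_y = llr_at_cutoff(x, y, h, "right") - llr_at_cutoff(x, y, h, "left")
    tau_d = llr_at_cutoff(x, d, h, "right") - llr_at_cutoff(x, d, h, "left")
    return tau_y / tau_d

# Simulated fuzzy design: take-up probability jumps from 0.2 to 0.8 at the cutoff.
rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 5000)
take_up = np.where(x >= 0.0, 0.8, 0.2)
d = (rng.uniform(size=5000) < take_up).astype(float)
y = 1.5 * d + x + 0.3 * rng.normal(size=5000)  # true effect on compliers: 1.5
print(fuzzy_rd(x, y, d, h=0.4))                # roughly 1.5
```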