2020
DOI: 10.3982/qe1199

Simple and honest confidence intervals in nonparametric regression

Abstract: We consider the problem of constructing honest confidence intervals (CIs) for a scalar parameter of interest, such as the regression discontinuity parameter, in nonparametric regression based on kernel or local polynomial estimators. To ensure that our CIs are honest, we use critical values that take into account the possible bias of the estimator upon which the CIs are based. We show that this approach leads to CIs that are more efficient than conventional CIs that achieve coverage by undersmoothing or subtra…


Cited by 50 publications (37 citation statements)
References 44 publications
“…The confidence band in Section 2.2 builds on the important work of [5] and [10] in constructing an upper bound on bias and using this to widen the confidence interval (see also [1,11,17] for confidence intervals for f at a point in the nonadaptive case). In contrast to these papers, which derive bounds on the bias of an estimator with bandwidth selected using Lepski's method, we bound the bias directly for each bandwidth and use the width of the resulting confidence band to choose the bandwidth (note, however, that the two approaches are related, since the bound on the bias ultimately comes from comparisons of estimates at different bandwidths, either explicitly in our approach, or implicitly through the use of Lepski's method to choose the bandwidth).…”
Section: Discussion
confidence: 99%
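The bias-aware widening that this quotation describes can be sketched numerically: given an upper bound B on the absolute bias of the estimator, the CI replaces the usual normal critical value with the 1−α quantile of |N(t, 1)|, where t = B/se. A minimal sketch, assuming this folded-normal construction (the function name `cva` and the root-finding approach are illustrative choices, not the papers' code):

```python
from scipy.stats import norm
from scipy.optimize import brentq

def cva(t, alpha=0.05):
    """Bias-aware critical value: the 1-alpha quantile of |N(t, 1)|,
    where t = (worst-case bias) / (standard error)."""
    # Coverage of [-c, c] for Z + t with Z ~ N(0, 1)
    coverage = lambda c: norm.cdf(c - t) - norm.cdf(-c - t)
    return brentq(lambda c: coverage(c) - (1 - alpha), 1e-8, 20 + abs(t))

# With no bias bound (t = 0) this reduces to the usual ~1.96;
# a worst-case bias of one standard error widens it to ~2.65.
print(cva(0.0))
print(cva(1.0))
```

The resulting CI is the point estimate plus or minus `cva(B/se) * se`, which is how a bias bound translates directly into a wider interval.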
“…To describe these results formally, let I n,α,F denote the set of confidence bands that satisfy the coverage requirement (1). Subject to this coverage requirement, we compare worst-case length of C n over a possibly smaller class G. Letting length(A) = sup A − inf A denote the length of a set A, let…”
Section: Introduction
confidence: 99%
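The quotation above is truncated mid-display. A hedged reconstruction of the minimax-length benchmark it is setting up, in the notation the quote introduces (the quoting paper's exact display may differ):

```latex
\inf_{C_n \in \mathcal{I}_{n,\alpha,\mathcal{F}}}\;
\sup_{f \in \mathcal{G}}\; E_f\,\operatorname{length}\bigl(C_n\bigr),
\qquad \operatorname{length}(A) = \sup A - \inf A .
```

That is, among all bands honest over the large class F, one compares worst-case expected length over the possibly smaller class G.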
“…To assess these rules, we use the visualization approach proposed in Noack and Rothe (2021), described and implemented in Appendix A.2. These visualizations suggest that the Armstrong and Kolesár (2020) ROT is quite conservative and allows for g and p to be quite nonsmooth. The second ROT delivers more optimistic smoothness bounds that generate reasonably smooth conditional mean functions.…”
Section: Empirical Application
confidence: 91%
“…) is given by Equation (3), with Y_i replaced by h(X_i). See Armstrong and Kolesár (2020) and Kolesár and Rothe (2018) for details. An appealing feature of the bias-aware CI is that because it accounts for the exact finite-sample bias of the estimator, it is valid under any bandwidth sequence, including a fixed bandwidth; for example, the bandwidth h may be selected to minimize the (worst case over F_RD(M)) mean squared error or the length of the resulting CI.…”
Section: Estimation and Inference With Measurement Error
confidence: 99%
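The bandwidth selection mentioned in this quotation (minimizing CI length rather than undersmoothing) can be sketched as a one-dimensional search. This is a stylized example with made-up constants, assuming a worst-case bias of order M·h²/2 and a standard error of order σ/√(nh) for a local linear estimator; none of these exact rates or the helper names come from the papers:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def cva(t, alpha=0.05):
    # 1-alpha quantile of |N(t, 1)|; t = worst-case bias / se
    f = lambda c: norm.cdf(c - t) - norm.cdf(-c - t) - (1 - alpha)
    return brentq(f, 1e-8, 20 + abs(t))

def ci_half_length(h, M=2.0, sigma=1.0, n=500):
    # Stylized worst-case bias and standard error at bandwidth h
    # (illustrative constants, not the papers' exact expressions)
    bias = M * h**2 / 2
    se = sigma / np.sqrt(n * h)
    return cva(bias / se) * se

# Pick the bandwidth that minimizes CI length over a grid
grid = np.linspace(0.01, 1.0, 200)
h_star = grid[np.argmin([ci_half_length(h) for h in grid])]
print(f"length-optimal bandwidth on the grid: {h_star:.3f}")
```

Because the critical value already accounts for the worst-case bias at each h, coverage holds at every grid point, so the minimizer can be used directly without any undersmoothing correction.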
“…For optimization, we need to make assumptions about the behavior of potential outcomes away from the threshold: first, we assume that linear extrapolation of potential outcomes is valid in a neighborhood around the threshold. Violations of this assumption introduce biases, and a highly relevant literature discusses related issues [18–20]. Our assumption is much stronger than continuity assumptions near the threshold; however, similar assumptions are often already necessary for the RD design to have statistical power.…”
Section: Introduction
confidence: 99%