This paper studies an asymptotic framework for conducting inference on parameters of the form φ(θ_0), where φ is a known directionally differentiable function and θ_0 is estimated by θ̂_n. In these settings, the asymptotic distribution of the plug-in estimator φ(θ̂_n) can be readily derived employing existing extensions of the Delta method. We show, however, that (full) differentiability of φ is a necessary and sufficient condition for bootstrap consistency whenever the limiting distribution of θ̂_n is Gaussian. An alternative resampling scheme is proposed which remains consistent when the bootstrap fails, and is shown to provide local size control under restrictions on the directional derivative of φ. We illustrate the utility of our results by developing a test of whether a Hilbert-space-valued parameter belongs to a convex set, a setting that includes moment inequality problems, tests of random utility models, and certain tests of shape restrictions as special cases (e.g., tests of monotonicity of the pricing kernel or of parametric conditional quantile model specifications).
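A canonical example of the bootstrap failure described in this abstract is φ(θ) = |θ| at θ_0 = 0, which is directionally but not fully differentiable. The Monte Carlo sketch below is our own illustration (not the paper's proposed resampling scheme): it compares the sampling distribution of the root √n(φ(θ̂_n) − φ(θ_0)) with its naive nonparametric bootstrap analogue when θ̂_n is a sample mean.

```python
import numpy as np

rng = np.random.default_rng(0)
n, B, reps = 200, 200, 200
phi = np.abs    # directionally, but not fully, differentiable at 0
theta0 = 0.0    # the kink point, where full differentiability fails

stats, boot_means = [], []
for _ in range(reps):
    x = rng.normal(theta0, 1.0, size=n)
    theta_hat = x.mean()
    # Sampling distribution of the root sqrt(n) * (phi(theta_hat) - phi(theta0))
    stats.append(np.sqrt(n) * (phi(theta_hat) - phi(theta0)))
    # Naive nonparametric bootstrap of the same root, centered at phi(theta_hat)
    boot = [np.sqrt(n) * (phi(rng.choice(x, size=n, replace=True).mean()) - phi(theta_hat))
            for _ in range(B)]
    boot_means.append(np.mean(boot))

# The true root is |Z| with Z ~ N(0,1), whose mean is sqrt(2/pi) ~ 0.80;
# the bootstrap mean is systematically smaller, reflecting its
# inconsistency at the kink.
print(np.mean(stats), np.mean(boot_means))
```

The gap between the two averages does not vanish as n grows, which is the practical content of the necessity result: no amount of data rescues the standard bootstrap at a point of non-differentiability.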
We propose a method for using instrumental variables (IV) to draw inference about causal effects for individuals other than those affected by the instrument at hand. Policy relevance and external validity turn on the ability to do this reliably. Our method exploits the insight that both the IV estimand and many treatment parameters can be expressed as weighted averages of the same underlying marginal treatment effects. Since the weights are identified, knowledge of the IV estimand generally places some restrictions on the unknown marginal treatment effects, and hence on the values of the treatment parameters of interest. We show how to extract information about the treatment parameter of interest from the IV estimand and, more generally, from a class of IV-like estimands that includes the two-stage least squares and ordinary least squares estimands, among others. Our method has several applications. First, it can be used to construct nonparametric bounds on the average causal effect of a hypothetical policy change. Second, our method allows the researcher to flexibly incorporate shape restrictions and parametric assumptions, thereby enabling extrapolation of the average effects for compliers to the average effects for different or larger populations. Third, our method can be used to test model specification and hypotheses about behavior, such as no selection bias and/or no selection on gain.
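The bounding logic in this abstract can be sketched numerically: discretize the marginal treatment effect m(u) on a grid, impose the observed estimand as a linear constraint on its identified weighted average, and optimize the target parameter's weighted average by linear programming. All weights, grid sizes, and numbers below are hypothetical, chosen only to illustrate the mechanics.

```python
import numpy as np
from scipy.optimize import linprog

# Grid on (0, 1) for the unobserved heterogeneity index u.
K = 50
u = (np.arange(K) + 0.5) / K

# Hypothetical identified weights: the IV estimand averages the MTE over
# "compliers" (here, stylized as u in (0.3, 0.7)); the target parameter
# averages the MTE over the whole population.
w_iv = np.where((u > 0.3) & (u < 0.7), 1.0, 0.0)
w_iv /= w_iv.sum()
w_target = np.full(K, 1.0 / K)
beta_iv = 0.4   # the observed IV estimand (an assumed number)

# The MTE values on the grid are the unknowns; bounded outcomes imply
# m(u) in [-1, 1].  The IV estimand pins down one linear combination.
A_eq, b_eq = w_iv[None, :], [beta_iv]
bounds = [(-1.0, 1.0)] * K
lo = linprog(w_target, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
hi = linprog(-w_target, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(lo.fun, -hi.fun)   # lower and upper bound on the target parameter
```

Adding more IV-like estimands adds rows to `A_eq`, and shape restrictions (e.g., monotonicity of m in u) add linear inequality constraints, tightening the bounds in exactly the way the abstract describes.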
We thank Justin McCrary, Nicolas Lepage-Saucier, Graham Elliott, Michael Jansson as well as the editor Jason Abrevaya and two anonymous referees for comments that helped greatly improve the paper. The views expressed herein are those of the authors and do not necessarily reflect the views of the National Bureau of Economic Research. NBER working papers are circulated for discussion and comment purposes. They have not been peer-reviewed or subject to the review by the NBER Board of Directors that accompanies official NBER publications.
We show by example that empirical likelihood and other commonly used tests for moment restrictions are unable to control the (exponential) rate at which the probability of a Type I error tends to zero. It follows that the optimality of empirical likelihood asserted in Kitamura (2001) does not hold without additional assumptions. Under stronger assumptions than those in Kitamura (2001), we establish the following optimality result: (i) empirical likelihood controls the rate at which the probability of a Type I error tends to zero and (ii) among all procedures for which the probability of a Type I error tends to zero at least as fast, empirical likelihood maximizes the rate at which the probability of a Type II error tends to zero for "most" alternatives. This result further implies that empirical likelihood maximizes the rate at which the probability of a Type II error tends to zero for all alternatives among a class of tests that satisfy a weaker criterion for their Type I error probabilities.
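For concreteness, the empirical likelihood test this abstract analyzes reduces, for the simple mean restriction E[X] = μ0, to a one-dimensional dual problem for a Lagrange multiplier. The sketch below implements that standard construction (not the large-deviation optimality analysis itself).

```python
import numpy as np
from scipy.optimize import brentq

def el_test_mean(x, mu0):
    """Empirical likelihood ratio statistic for H0: E[X] = mu0, computed
    via the dual problem for the Lagrange multiplier; asymptotically
    chi-squared with 1 degree of freedom under H0.  An illustrative
    sketch of the textbook EL construction."""
    z = x - mu0
    if z.min() >= 0 or z.max() <= 0:
        return np.inf  # mu0 outside the convex hull of the data
    # Positive implied weights require 1 + lam * z_i > 0 for all i,
    # which brackets the multiplier between these two endpoints.
    eps = 1e-8
    lo = -1.0 / z.max() + eps
    hi = -1.0 / z.min() - eps
    g = lambda lam: np.sum(z / (1.0 + lam * z))  # first-order condition
    lam = brentq(g, lo, hi)
    return 2.0 * np.sum(np.log1p(lam * z))
```

The statistic is exactly zero at μ0 equal to the sample mean and diverges as μ0 approaches the boundary of the data's convex hull, which is the source of the delicate tail behavior that the rate results above address.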
This paper studies the properties of the wild bootstrap-based test proposed in Cameron et al. (2008) in settings with clustered data. Cameron et al. (2008) provide simulations that suggest this test works well even in settings with as few as five clusters, but existing theoretical analyses of its properties all rely on an asymptotic framework in which the number of clusters is "large." In contrast to these analyses, we employ an asymptotic framework in which the number of clusters is "small," but the number of observations per cluster is "large." In this framework, we provide conditions under which the limiting rejection probability of an un-Studentized version of the test does not exceed the nominal level. Importantly, these conditions require, among other things, certain homogeneity restrictions on the distribution of covariates. We further establish that the limiting rejection probability of a Studentized version of the test does not exceed the nominal level by more than an amount that decreases exponentially with the number of clusters. We study the relevance of our theoretical results for finite samples via a simulation study.
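A minimal sketch of a wild cluster bootstrap test in the spirit of the procedure studied here, with Rademacher weights drawn at the cluster level and the null imposed when generating bootstrap samples. This is the un-Studentized variant under simplified assumptions (simple regression, known cluster labels), not the exact test analyzed in the paper.

```python
import numpy as np

def wild_cluster_bootstrap_pvalue(y, x, cluster, B=999, seed=0):
    """Two-sided p-value for H0: slope = 0 in a simple regression,
    using an un-Studentized wild cluster bootstrap: residuals from the
    null-restricted model are multiplied by a single Rademacher sign
    per cluster.  Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones_like(x), x])

    def slope(yv):
        return np.linalg.lstsq(X, yv, rcond=None)[0][1]

    b_hat = slope(y)
    resid0 = y - y.mean()           # residuals with the null (slope = 0) imposed
    labels = np.unique(cluster)     # sorted unique cluster labels
    idx = np.searchsorted(labels, cluster)
    boot = np.empty(B)
    for b in range(B):
        w = rng.choice([-1.0, 1.0], size=len(labels))  # one sign per cluster
        y_star = y.mean() + w[idx] * resid0
        boot[b] = slope(y_star)
    return float(np.mean(np.abs(boot) >= np.abs(b_hat)))
```

With G clusters the Rademacher weights have only 2^G support points, so the bootstrap distribution is coarse when G is small; this discreteness is one reason the few-clusters regime requires the separate asymptotic analysis described above.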