Published: 2001
DOI: 10.2307/3316051

Simple and accurate one‐sided inference from signed roots of likelihood ratios

Abstract: The authors propose two methods based on the signed root of the likelihood ratio statistic for one-sided testing of a simple null hypothesis about a scalar parameter in the presence of nuisance parameters. Both methods are third-order accurate and utilise simulation to avoid the need for onerous analytical calculations characteristic of competing saddlepoint procedures. Moreover, the new methods do not require specification of ancillary statistics. The methods respect the conditioning associated with similar t…

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
2
2
1

Citation Types

2
71
0

Year Published

2001
2001
2016
2016

Publication Types

Select...
5
2
1

Relationship

0
8

Authors

Journals

Cited by 59 publications (73 citation statements). References 30 publications.
“…This means restricting resampling to those samples drawn from a fitted model with ancillary statistic values close to the original sample value. This can be done in Example 1, but numerical results support the view expressed by DiCiccio, Martin and Stern (2001) that when r_a is easily constructed, it is likely to be preferable to bootstrapping in terms of conditional accuracy. However, a full evaluation of parametric bootstrap methods in terms of such considerations remains to be undertaken.…”
Section: Bootstraps For Parametric Likelihood Inference (supporting)
confidence: 52%
“…Let r†_p be the version of r_p based on a random vector Y† that has density f_Y(y; γ, ξ_γ). DiCiccio, Martin and Stern (2001) showed that approximation of the distribution of r_p by that of r†_p is accurate to order O(n^{-3/2}). Under this approach, a confidence set of nominal coverage 1 − α for the parameter γ of interest is {γ : r_p(γ) ≤ c_{1−α}(γ, ξ_γ)}, where c_{1−α}(γ, ξ_γ) denotes the 1 − α quantile of the sampling distribution of r†_p, that is, the 1 − α quantile of the distribution of r_p(γ) when the true parameter value is (γ, ξ_γ).…”
Section: Bootstraps For Parametric Likelihood Inference (mentioning)
confidence: 99%
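To make the quoted construction concrete, here is a minimal sketch of the bootstrap calibration of the signed root; it is not the authors' own code. It assumes a normal model with the mean as the interest parameter and the variance as the nuisance parameter, takes ξ_γ to be the constrained maximum likelihood estimate of the nuisance under the fixed interest value, and uses hypothetical helper names (signed_root, bootstrap_pvalue).

    import numpy as np

    def signed_root(y, mu0):
        # Signed root r_p(mu0) of the likelihood ratio statistic for the mean
        # of a normal sample, with the variance as nuisance parameter.
        n = len(y)
        s2_hat = np.mean((y - y.mean()) ** 2)    # unconstrained MLE of the variance
        s2_null = np.mean((y - mu0) ** 2)        # constrained MLE under mu = mu0
        w = n * np.log(s2_null / s2_hat)         # likelihood ratio statistic
        return np.sign(y.mean() - mu0) * np.sqrt(max(w, 0.0))

    def bootstrap_pvalue(y, mu0, B=4999, seed=None):
        # One-sided p-value for H0: mu = mu0 against mu > mu0, calibrated by
        # simulating r_p under the model fitted with the null value of the
        # interest parameter and the constrained nuisance estimate.
        rng = np.random.default_rng(seed)
        r_obs = signed_root(y, mu0)
        s2_null = np.mean((y - mu0) ** 2)
        r_sim = np.array([signed_root(rng.normal(mu0, np.sqrt(s2_null), len(y)), mu0)
                          for _ in range(B)])
        return np.mean(r_sim >= r_obs)

    rng = np.random.default_rng(1)
    y = rng.normal(0.3, 1.0, size=15)            # illustrative data
    print(bootstrap_pvalue(y, mu0=0.0, B=2000, seed=2))

A confidence set of the quoted form is obtained by retaining the values mu0 for which r_p(mu0) does not exceed the 1 − α quantile of the simulated values.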
“…We have examined higher-order expansions of the distribution of p-values obtained by normal approximation and by bootstrap approximation under an asymptotic regime involving a general contiguous alternative hypothesis. Our analysis is based on the testing framework described by DiCiccio et al. (2001) and Lee & Young (2005). That framework, and the conclusions of Lee & Young (2005) concerning the distribution of p-values under the null hypothesis, were extended by Stern (2006) to test statistics based on a certain class of M-estimators, and future extension of the results here concerning distributions of p-values under an alternative hypothesis to such statistics would be worthwhile.…”
Section: ·2 Multisample Normal Model (mentioning)
confidence: 99%
“…We will focus in particular on comparing the repeated sampling distribution of p-values calculated from the asymptotic normal approximation to the null sampling distribution of the statistic with that of p-values calculated by bootstrap approximations to the sampling distribution of the statistic (DiCiccio et al., 2001; Lee & Young, 2005; Stern, 2006). In some generality (Lee & Young, 2005), p-values approximated analytically or by bootstrapping are known to be asymptotically uniform under the null hypothesis, with p-values obtained by bootstrap approximation being more nearly uniform under the null hypothesis than those calculated from a normal approximation.…”
Section: Introduction (mentioning)
confidence: 99%
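The comparison described in this passage can be illustrated by computing both p-values for the same data, continuing the hypothetical sketch given after the earlier passage (signed_root and bootstrap_pvalue as defined there); the normal-approximation p-value simply treats r_p as standard normal under the null.

    from scipy.stats import norm

    r_obs = signed_root(y, 0.0)
    p_normal = 1.0 - norm.cdf(r_obs)                        # first-order normal approximation
    p_boot = bootstrap_pvalue(y, mu0=0.0, B=2000, seed=3)   # bootstrap calibration
    print(p_normal, p_boot)

Repeating this over many datasets simulated under the null and inspecting the empirical distributions of p_normal and p_boot gives the uniformity comparison the passage describes.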
“…A key form of analytically adjusted statistic is Barndorff-Nielsen's R* statistic (Barndorff-Nielsen, 1986), which is of the form R* = R + log(U/R)/R, in terms of an analytic adjustment quantity U. DiCiccio et al. (2001) and Lee & Young (2005) considered inference based on the bootstrap distribution obtained by considering the distribution of R(ψ_0) under sampling from the density f(y; ψ_0, λ_0). Both of these third-order accurate inference procedures are observed in many situations to achieve spectacularly low levels of error even in small-sample settings.…”
Section: Introduction (mentioning)
confidence: 99%
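By contrast with the simulation route, the analytic route quoted here requires the model-specific adjustment quantity U, whose derivation is the kind of onerous calculation the simulation-based methods of the abstract are designed to avoid. Once U is available, the adjustment itself is a one-line transformation; the following is a minimal sketch (hypothetical function names, with U assumed to be supplied externally).

    import numpy as np
    from scipy.stats import norm

    def r_star(r, u):
        # Barndorff-Nielsen adjusted signed root: R* = R + log(U/R) / R.
        # Used away from r near 0, where the formula is numerically unstable;
        # U/R must be positive for the logarithm to be defined.
        return r + np.log(u / r) / r

    def p_value(r, u):
        # One-sided p-value from the standard normal approximation to R*,
        # which the quoted passage notes is accurate to third order.
        return 1.0 - norm.cdf(r_star(r, u))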