2019
DOI: 10.1016/j.csda.2019.04.004
Location-adjusted Wald statistics for scalar parameters

Abstract: Inference about a scalar parameter of interest is a core statistical task that has attracted immense research in statistics. The Wald statistic is a prime candidate for the task, on the grounds of the asymptotic validity of the standard normal approximation to its finite-sample distribution, simplicity and low computational cost. It is well known, though, that this normal approximation can be inadequate, especially when the sample size is small or moderate relative to the number of parameters. A novel, algebra…
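The location adjustment itself is algebraic and is given in the paper; as context, the sketch below (a minimal illustration constructed here, not code from the paper) computes the standard, unadjusted Wald statistic whose normal approximation the paper improves, for the rate of an exponential sample:

```python
import numpy as np

# Minimal sketch (not from the paper): the plain Wald statistic for the rate
# of an exponential distribution. Its N(0, 1) approximation degrades for
# small n, which is the inadequacy the location adjustment targets.
rng = np.random.default_rng(0)
n, rate = 20, 2.0                   # small sample, true rate
x = rng.exponential(scale=1 / rate, size=n)

rate_hat = 1 / x.mean()             # maximum likelihood estimate
se_hat = rate_hat / np.sqrt(n)      # from expected information I(r) = n / r**2
z = (rate_hat - rate) / se_hat      # Wald statistic, approximately N(0, 1)
print(z)
```

Repeating this over many samples at n = 20 gives a noticeably skewed distribution of z, the kind of finite-sample inadequacy the abstract refers to.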

Cited by 4 publications (4 citation statements)
References 46 publications (58 reference statements)
“…It turns out that the first-order bias of $\hat{\psi}_j$ (see Kosmidis & Firth, 2010, Section 4.3, Remark 3) can be written in terms of the first-order bias of $\hat{\theta}_j$, the first two derivatives of the function $\exp$ and the inverse of the expected information matrix. The first-order bias of general transformations of the MLE is given in Di Caterina and Kosmidis (2019) and is used to derive a simple location adjustment for common Wald statistics that considerably improves the performance of Wald-type inference. In particular, when using the function $\exp$, by Di Caterina and Kosmidis (2019, Expression 6) or Kosmidis and Firth (2010, Section 4.3, Remark 3),
$$E(\hat{\psi}_j - \psi_j) = B_{\hat{\psi}_j}(\bm{\psi}) + O(n^{-2}), \quad j = 1, \ldots, p,$$
where $B_{\hat{\psi}_j}(\bm{\psi}) = \bm{B}_{\hat{\bm{\theta}}}(\bm{\theta})^T\,(0, \ldots, 0, \exp(\theta_j), 0, \ldots, 0)^T + \frac{1}{2}\operatorname{tr}\{I(\bm{\theta})^{-1}[\ldots]\}$ …”
Section: Bias of the Exponentially Transformed Parameter Estimates
Mentioning, confidence: 99%
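The expansion quoted above can be sanity-checked in a case where every ingredient is available in closed form. In the sketch below (an illustration constructed here, not from the citing paper), $\hat{\theta}$ is the mean of $n$ draws from $N(\theta, \sigma^2)$, so the bias $B_{\hat{\theta}}$ is zero and $I(\theta)^{-1} = \sigma^2/n$; the expansion then predicts $E\{\exp(\hat{\theta})\} - \exp(\theta) \approx \frac{1}{2}\exp(\theta)\,\sigma^2/n$, which agrees to first order with the exact lognormal mean $\exp(\theta + \sigma^2/(2n))$:

```python
import numpy as np

# Sanity check (illustration only, not from the citing paper): for the normal
# mean, theta_hat = x_bar is unbiased with variance sigma**2 / n, so the
# first-order bias of exp(theta_hat) should be 0.5 * exp(theta) * sigma**2 / n.
rng = np.random.default_rng(1)
n, theta, sigma, reps = 30, 1.0, 1.5, 200_000

xbar = rng.normal(theta, sigma / np.sqrt(n), size=reps)   # sampling dist. of x_bar
bias_mc = np.exp(xbar).mean() - np.exp(theta)             # Monte Carlo bias
bias_pred = 0.5 * np.exp(theta) * sigma**2 / n            # predicted first-order bias
bias_exact = np.exp(theta + sigma**2 / (2 * n)) - np.exp(theta)  # lognormal mean

print(bias_mc, bias_pred, bias_exact)
```

The three numbers agree closely, with differences of order $n^{-2}$ plus Monte Carlo noise, matching the $O(n^{-2})$ remainder in the quoted expansion.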
“…Di Caterina and Kosmidis (2019) show that there is a simple way to derive the mean bias of $h(\theta^*)$ for any three-times differentiable function $h: C \to D$, with $C \subset \mathbb{R}^p$ and $D \subset \mathbb{R}$, where $\theta^*$ is an mBR estimator of $\theta$ with $O(N^{-2})$ bias. In particular, Di Caterina and Kosmidis (2019) show that the estimator $h(\theta^*)$ of $\zeta = h(\theta)$ has mean bias…”
Section: Mean Bias Reduction and General Parameter Transformations
Mentioning, confidence: 99%
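The quotation is truncated at the displayed bias. As a sketch of where the standard second-order Taylor argument leads under the stated assumptions (a reconstruction, not text from the citing paper), with $E(\theta^* - \theta) = O(N^{-2})$ and $\operatorname{cov}(\theta^*) = I(\theta)^{-1} + O(N^{-2})$,
$$E\{h(\theta^*)\} - h(\theta) = \nabla h(\theta)^T E(\theta^* - \theta) + \frac{1}{2}\operatorname{tr}\{\nabla^2 h(\theta)\operatorname{cov}(\theta^*)\} + \cdots = \frac{1}{2}\operatorname{tr}\{\nabla^2 h(\theta)\, I(\theta)^{-1}\} + O(N^{-2}),$$
where the gradient term drops out precisely because $\theta^*$ is mean bias reduced.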