2020
DOI: 10.1007/s42081-020-00089-6
Robust ridge M-estimators with pretest and Stein-rule shrinkage for an intercept term

Cited by 9 publications (4 citation statements)
References 32 publications
“…The idea was proposed by Bancroft [36], who formulated pretest estimators that incorporate a preliminary hypothesis test into estimation. Judge and Bock [37] extensively studied pretest estimators with applications to econometrics; see also more recent works [38–43]. Among all, we particularly note that Shih et al. [41] proposed the general pretest (GPT) estimator, which includes empirical Bayes and Types I–II shrinkage pretest estimators for the univariate normal mean.…”
Section: Estimation Under Sparse Means
Confidence: 97%
“…We now consider discrete shrinkage schemes by pre-testing H₀: μᵢ = 0 vs. H₁: μᵢ ≠ 0 for i = 1, 2, …, G. The idea was proposed by Bancroft [36], who developed pretest estimators (see also more recent works [20, 37–43]). In the meta-analytic context, Taketomi et al. [20] adopted the general pretest (GPT) estimator of Shih et al. [41], which is defined as follows:…”
Section: Estimators Under Sparse Means
Confidence: 99%
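The discrete shrinkage scheme described above can be illustrated with a minimal sketch: each mean is pre-tested against zero and the estimate is hard-thresholded accordingly. This is a generic Bancroft-style pretest estimator for illustration only, not the exact GPT estimator of Shih et al. [41]; the critical value 1.96 (a two-sided 5% z-test) is an assumption.

```python
import numpy as np

def pretest_estimator(xbar, se, z_crit=1.96):
    """Bancroft-style pretest (hard-threshold) estimator for sparse means.

    Each mean mu_i is pre-tested via H0: mu_i = 0 vs. H1: mu_i != 0;
    the estimate is 0 when H0 is retained and the sample mean otherwise.
    Illustrative sketch only; z_crit = 1.96 is a two-sided 5% z-test.
    """
    keep = np.abs(xbar / se) > z_crit   # reject H0 -> keep the estimate
    return np.where(keep, xbar, 0.0)

# Example: two means survive the pretest, two are shrunk exactly to zero
xbar = np.array([0.1, 2.5, -0.05, -3.0])
se = np.ones(4)
print(pretest_estimator(xbar, se))  # estimates: [0.0, 2.5, 0.0, -3.0]
```

The hard thresholding is what makes the scheme "discrete": each component is either kept at its sample value or set exactly to zero, unlike continuous shrinkage rules.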
“…To overcome this difficulty, Yang and Emura [15] suggested reducing the number of shrinkage parameters for the case of p > n in their formulation of a generalized ridge estimator. The idea behind their approach was to assign two different weights (1 or 0.5) to the shrinkage parameters via preliminary tests [22–26]. While this approach has sound statistical performance in sparse and high-dimensional models, no software package implemented the generalized ridge estimator.…”
Section: Introduction
Confidence: 99%
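The two-weight idea above can be sketched as a generalized ridge estimator whose penalty diagonal takes weight 0.5 or 1 per coefficient depending on a preliminary test. This is a hedged illustration: the marginal test statistic and the base penalty λ used here are hypothetical stand-ins, not the actual pretest procedure of Yang and Emura [15].

```python
import numpy as np

def two_weight_ridge(X, y, lam=1.0, z_crit=1.96):
    """Generalized ridge with two pretest-chosen shrinkage weights (sketch).

    Coefficients whose (crude, illustrative) marginal statistic exceeds
    z_crit receive the lighter weight 0.5; the rest receive weight 1.
    Solves (X'X + lam * diag(w)) b = X'y, which is well-posed even for p > n.
    """
    # crude marginal statistic per coefficient (assumption, for illustration)
    t = X.T @ y / (np.linalg.norm(X, axis=0) * (np.std(y) + 1e-12))
    w = np.where(np.abs(t) > z_crit, 0.5, 1.0)  # lighter shrinkage if "significant"
    D = lam * np.diag(w)                        # generalized ridge penalty matrix
    return np.linalg.solve(X.T @ X + D, X.T @ y)

# Works even when p > n: the positive-diagonal penalty makes X'X + D invertible
rng = np.random.default_rng(0)
n, p = 20, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:3] = 5.0              # sparse true coefficients
y = X @ beta + rng.standard_normal(n)
print(two_weight_ridge(X, y).shape)             # one estimate per coefficient
```

Restricting the weights to two values keeps the number of distinct shrinkage parameters at one (λ) rather than p, which is the dimension reduction the excerpt describes.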