2016
DOI: 10.1002/asmb.2193
Post selection shrinkage estimation for high‐dimensional data analysis

Abstract: In high-dimensional data settings where p ≫ n, many penalized regularization approaches have been studied for simultaneous variable selection and estimation. However, in the presence of covariates with weak effects, many existing variable selection methods, including Lasso and its generalizations, cannot distinguish covariates with weak contributions from those with none. Thus, prediction based only on a subset model of selected covariates can be inefficient. In this paper, we propose a post selection shrinkage estimation strategy to…

Cited by 33 publications (27 citation statements). References 31 publications.
“…The following corollary shows that Theorem 3.1, together with Theorem 2 in Zhang and Huang (2008) and Corollary 2 in Gao et al (2017), further implies selection consistency for $\mathcal{S}_1$, $\mathcal{S}_{\mathrm{WBC}}$, and $\mathcal{S}_2^*$.…”
Section: Asymptotic Properties
confidence: 75%
“…Obtain a candidate subset $\hat{\mathcal{S}}_1$ of strong signals using a penalized regression method. Here, we consider the penalized least squares (PLS) estimator from Gao et al (2017): $$\hat{\boldsymbol\beta}^{\mathrm{PLS}} = \arg\min_{\boldsymbol\beta}\Big\{\|\mathbf{y}-\mathbf{X}\boldsymbol\beta\|_2^2 + \sum_{j=1}^{p}\mathrm{Pen}_\lambda(\beta_j)\Big\},$$ where $\mathrm{Pen}_\lambda(\beta_j)$ is a penalty on each individual $\beta_j$ that shrinks the weak effects toward zero and selects the strong signals, with the tuning parameter $\lambda > 0$ controlling the size of the candidate subset $\hat{\mathcal{S}}_1$. Commonly used penalties are $\mathrm{Pen}_\lambda(\beta_j) = \lambda|\beta_j|$ and $\mathrm{Pen}_\lambda(\beta_j) = \lambda\omega_j|\beta_j|$ for Lasso and adaptive Lasso, respectively, where $\omega_j > 0$ is a known weight.…”
Section: Methods
confidence: 99%
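The PLS step quoted above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it solves the Lasso case $\mathrm{Pen}_\lambda(\beta_j) = \lambda|\beta_j|$ by coordinate descent on simulated data, where the function names, the simulated design, and the choice of $\lambda$ are all assumptions made for the example.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator: the Lasso coordinate update."""
    return np.sign(z) * max(abs(z) - t, 0.0)

def lasso_pls(X, y, lam, n_sweeps=200):
    """Minimize ||y - X beta||_2^2 + lam * sum_j |beta_j| by cyclic
    coordinate descent (matches the PLS objective with Lasso penalty)."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)      # ||x_j||^2 for each column
    r = y.copy()                       # residual y - X @ beta (beta = 0)
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * beta[j]     # remove coordinate j's current fit
            z = X[:, j] @ r
            beta[j] = soft_threshold(z, lam / 2.0) / col_sq[j]
            r -= X[:, j] * beta[j]     # restore with the updated beta_j
    return beta

# Illustrative p > n setup (assumed, for demonstration only):
rng = np.random.default_rng(0)
n, p = 100, 200
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]       # three strong signals
y = X @ beta_true + 0.5 * rng.standard_normal(n)

beta_hat = lasso_pls(X, y, lam=40.0)
S1_hat = np.flatnonzero(beta_hat)      # candidate strong-signal subset
```

Larger `lam` shrinks more weak effects to exactly zero and so yields a smaller candidate subset, which is the role the quoted passage assigns to the tuning parameter $\lambda$; the adaptive-Lasso variant would simply use a per-coordinate threshold `lam * w[j] / 2.0`.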