2016
DOI: 10.5705/ss.202014.0011

The Mnet method for variable selection

Abstract: We propose a penalized approach for variable selection using a combination of minimax concave and ridge penalties. The method is designed to deal with p ≥ n problems with highly correlated predictors. We call this the Mnet method. Similar to the elastic net of Zou and Hastie (2005), the Mnet tends to select or drop highly correlated predictors together. However, unlike the elastic net, the Mnet is selection consistent and equal to the oracle ridge estimator with high probability under reasonable conditions. We…
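The abstract describes the Mnet penalty as a combination of the minimax concave penalty (MCP) and a ridge penalty. A minimal sketch of evaluating such a combined penalty follows; the function names and the exact scaling of the ridge term are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def mcp(beta, lam, gamma=3.0):
    """Minimax concave penalty (MCP), applied elementwise.

    MCP(t; lam, gamma) = lam*|t| - t^2/(2*gamma)  if |t| <= gamma*lam
                       = gamma*lam^2/2            otherwise
    """
    a = np.abs(beta)
    return np.where(a <= gamma * lam,
                    lam * a - a ** 2 / (2.0 * gamma),
                    gamma * lam ** 2 / 2.0)

def mnet_penalty(beta, lam1, lam2, gamma=3.0):
    """Illustrative Mnet-style penalty: MCP part plus a ridge term.

    The ridge term here is written as (lam2/2)*||beta||^2; the paper's
    exact scaling may differ.
    """
    return mcp(beta, lam1, gamma).sum() + 0.5 * lam2 * np.sum(beta ** 2)
```

Note that the MCP part flattens out for large coefficients (reducing bias relative to the lasso), while the ridge term is what encourages correlated predictors to enter or leave the model together.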

Cited by 45 publications (61 citation statements)
References 22 publications
“…However, when a group of variables have high correlations, the LASSO, MCP, and SCAD may select only one of them, leading to possibly inferior prediction performance compared with ridge regression. 6,7 In medical research, an individual biomarker may have useful but limited power to predict clinical outcomes. For instance, the accuracy of urine IL-18 and urine NGAL for the diagnosis of severe AKI is moderate.…”
Section: Penalized Generalized Linear Regression (mentioning)
confidence: 99%
“…A reparameterization is often convenient for implementation: ϕ = λ1 + λ2 and α = λ1/ϕ. 7,8 Therefore, for a fixed α, we can compute solutions for a decreasing sequence of ϕ. Specifically, as candidate values of ϕ we consider a decreasing sequence ϕk, k = 1, …, K, from ϕmax to ϕmin on the log scale, with ϕmin = εϕmax.…”
Section: Parameter Estimation (mentioning)
confidence: 99%
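The reparameterization and tuning-parameter grid quoted above can be sketched as follows; `phi_path` and `split_penalties` are hypothetical helper names, and the defaults for `eps` and `K` are illustrative:

```python
import numpy as np

def phi_path(phi_max, eps=1e-3, K=100):
    """Decreasing grid phi_1 = phi_max, ..., phi_K = eps * phi_max,
    equally spaced on the log scale, as in the quoted passage."""
    return np.exp(np.linspace(np.log(phi_max), np.log(eps * phi_max), K))

def split_penalties(phi, alpha):
    """Recover the two penalty parameters from the reparameterization
    phi = lam1 + lam2 and alpha = lam1 / phi."""
    lam1 = alpha * phi
    lam2 = (1.0 - alpha) * phi
    return lam1, lam2
```

For a fixed α, one would solve along the ϕ grid from largest to smallest, warm-starting each fit from the previous solution, which is the standard pathwise strategy for penalized regression.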
“…In the future, it is a potential direction for us to try different penalty functions proved to be useful in the one-step estimation procedures, like the MNet penalty [19] and the weight fused elastic-net penalty [20,21] for dealing with highly correlated variables in high-dimensional regression problems.…”
Section: Discussion (mentioning)
confidence: 99%