2015
DOI: 10.1016/j.csda.2015.07.001

Moderately clipped LASSO

Cited by 19 publications (12 citation statements) | References 26 publications
“…The MCL penalty is a smooth interpolation between the MCP and the LASSO penalty: it becomes the MCP when γ = 0 and the LASSO when γ = λ (Figure 1). The MCL penalty has two regularization parameters, λ and γ, where λ controls sparsity as in the MCP and γ sets the amount of shrinkage applied to the nonzero regression coefficients as in the LASSO (Kwon et al, 2015). Therefore, we expect the MCL to select relevant predictive variables like the MCP (Zhang, 2010) while achieving the high prediction accuracy of the LASSO (Zhang and Huang, 2008) in finite samples.…”
Section: Definition
confidence: 99%
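The statement above pins down the MCL's limiting behavior at γ = 0 and γ = λ. The snippet below is a minimal sketch of a penalty with exactly that behavior, defined through its derivative max{γ, (λ − |t|/a)₊}; the concavity parameter `a` and this exact parameterization are illustrative assumptions, not necessarily the paper's notation.

```python
import numpy as np

def mcl_penalty(t, lam, gamma, a=3.0):
    """Sketch of a moderately-clipped-LASSO-style penalty (assumed form).

    Integrates the derivative max(gamma, (lam - |t|/a)_+):
      - gamma = 0    -> the MCP:  lam*|t| - t^2/(2a), flat beyond a*lam
      - gamma = lam  -> the LASSO: lam*|t|
    """
    t = np.abs(np.asarray(t, dtype=float))
    knot = a * (lam - gamma)          # point where the MCP slope hits the floor gamma
    mcp_part = lam * t - t**2 / (2 * a)               # concave MCP segment
    tail = (lam * knot - knot**2 / (2 * a)) + gamma * (t - knot)  # linear LASSO-like tail
    return np.where(t <= knot, mcp_part, tail)
```

With γ = λ the knot collapses to zero and the penalty is the pure LASSO λ|t|; with γ = 0 it flattens beyond a·λ like the MCP, which is the interpolation the quoted passage describes.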
“…Many penalties combine two existing penalties in order to preserve the desirable properties of both. Examples include the elastic net (Zou and Hastie, 2005), the sparse ridge (Kwon et al, 2013) and the moderately clipped LASSO (Kwon et al, 2015). The elastic net combines the LASSO and ridge penalties for variable selection that also handles highly correlated variables; the sparse ridge follows the same idea by combining the MCP and ridge.…”
Section: Introduction
confidence: 99%
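For a concrete reference point for this kind of combined penalty, here is a minimal sketch of the elastic net penalty as the α-weighted sum of ℓ1 and ℓ2 terms; the function name and the (λ, α) parameterization follow the common convention, not any notation from this paper.

```python
import numpy as np

def elastic_net_penalty(beta, lam, alpha):
    """Elastic net penalty (Zou and Hastie, 2005), common parameterization:
    lam * (alpha * ||beta||_1 + (1 - alpha)/2 * ||beta||_2^2).
    alpha = 1 recovers the LASSO; alpha = 0 recovers ridge."""
    beta = np.asarray(beta, dtype=float)
    return lam * (alpha * np.sum(np.abs(beta))
                  + (1 - alpha) / 2 * np.sum(beta**2))
```

The sparse ridge mentioned above swaps the ℓ1 term for the MCP while keeping the ridge part, which is the same "combine two penalties" construction.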
“…The smoothly clipped absolute deviation penalty (SCAD, [3]) is another folded concave penalty that performs similarly to the MCP. Other variants of the Lasso include the adaptive Lasso [4], the relaxed Lasso [5] and the moderately clipped Lasso [6]. Regularization methods with the above penalties can perform well for variable selection in high-dimensional settings when strong correlation is absent among many covariates.…”
Section: Introduction
confidence: 99%
“…In this paper, we develop an R package ncpen for non-convex penalized estimation based on the convex-concave procedure (CCCP) or difference-convex (DC) algorithm (Kim et al, 2008; Shen et al, 2012) and the modified local quadratic approximation (MLQA) algorithm (Lee et al, 2016). The main contribution of ncpen is that it encompasses most existing non-convex penalties, including the truncated ℓ1 (Shen et al, 2013), log (Zou and Li, 2008; Friedman, 2012), bridge (Huang et al, 2008), moderately clipped LASSO (Kwon et al, 2015) and sparse ridge (Huang et al, 2013; Kwon et al, 2013) penalties as well as the SCAD and MCP, and covers a broader range of regression models: multinomial and Cox models as well as the GLM. Further, ncpen provides two unique options, investigation of initial-dependent solution paths and non-standardization of input variables, which give users more flexibility.…”
Section: Introduction
confidence: 99%