2020
DOI: 10.1109/tnnls.2019.2935001

A Semismooth Newton Algorithm for High-Dimensional Nonconvex Sparse Learning

Abstract: The smoothly clipped absolute deviation (SCAD) and the minimax concave penalty (MCP) penalized regression models are two important and widely used nonconvex sparse learning tools that can handle variable selection and parameter estimation simultaneously, and thus have potential applications in various fields such as mining biological data in high-throughput biomedical studies. Theoretically, these two models enjoy the oracle property even in the high-dimensional settings, where the number of predictors p may b…
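
Since the abstract leans on the SCAD and MCP penalties, a minimal sketch of the two penalty functions may help. This is an illustration only: the function names are ours, and the defaults a = 3.7 (Fan and Li's conventional choice for SCAD) and gamma = 3.0 come from the general literature, not from this paper.

```python
import numpy as np

def scad_penalty(t, lam, a=3.7):
    """SCAD penalty evaluated elementwise; a = 3.7 is the conventional
    choice. Linear near zero, quadratic blending, then constant."""
    t = np.abs(t)
    small = t <= lam
    mid = (t > lam) & (t <= a * lam)
    return np.where(
        small, lam * t,
        np.where(mid, (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1)),
                 lam**2 * (a + 1) / 2))

def mcp_penalty(t, lam, gamma=3.0):
    """MCP penalty evaluated elementwise; gamma controls how quickly the
    penalty flattens out (constant beyond gamma * lam)."""
    t = np.abs(t)
    return np.where(t <= gamma * lam,
                    lam * t - t**2 / (2 * gamma),
                    gamma * lam**2 / 2)
```

Both penalties agree with the lasso penalty λ|t| near the origin but level off for large |t|, which is what removes the lasso's systematic shrinkage bias on large coefficients and yields the oracle property the abstract refers to.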

Cited by 16 publications (10 citation statements). References 57 publications (141 reference statements).
“…This creates various difficulties in constructing iterative computational algorithms for finding this oracle estimator. Following the ideas of [5,15], we develop a primal-dual active set (PDAS) algorithm for the computations. We then introduce a sequential version of the PDAS algorithm with a warm-start strategy.…”
Section: PDAS and SPDAS (mentioning)
confidence: 99%
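
For orientation, here is a minimal sketch of the kind of primal-dual active set iteration the quote refers to, written for least squares with a fixed support size T. The problem form, the names, and the stopping rule are our assumptions for illustration, not the cited paper's exact algorithm.

```python
import numpy as np

def pdas_step(X, y, beta, T):
    """One primal-dual active set iteration (sketch): pick the active set
    from the combined primal/dual signal, then refit on that set."""
    n = X.shape[0]
    d = X.T @ (y - X @ beta) / n                 # dual variable (scaled gradient)
    active = np.argsort(-np.abs(beta + d))[:T]   # T largest |beta_i + d_i|
    beta_new = np.zeros_like(beta)
    # Primal update: least squares restricted to the active set.
    beta_new[active] = np.linalg.lstsq(X[:, active], y, rcond=None)[0]
    return beta_new, active

def pdas(X, y, T, max_iter=50):
    """Iterate until the active set stabilizes."""
    beta = np.zeros(X.shape[1])
    prev = None
    for _ in range(max_iter):
        beta, active = pdas_step(X, y, beta, T)
        if prev is not None and set(active) == set(prev):
            break
        prev = active
    return beta
```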
“…Nevertheless, since (1.2) is a non-convex, non-smooth optimisation problem, it is difficult to develop a stable and efficient computational algorithm for its solution, especially in high-dimensional, sparse settings. Inspired by [5,15], we construct a primal-dual active set (PDAS) algorithm for solving the minimisation problem (1.2). Our approach is motivated by the KKT conditions of the hard-thresholding regularised problem.…”
Section: Introduction (mentioning)
confidence: 99%
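
To make the KKT motivation concrete, the first-order conditions for the hard-thresholding (ℓ0) regularised least-squares problem can be sketched as follows; the notation is assumed here for illustration, not copied from the cited paper.

```latex
% Sketch: stationarity conditions for
%   min_beta  (1/2n) ||y - X beta||^2 + lambda ||beta||_0
\[
d = \tfrac{1}{n} X^{\top} (y - X\beta), \qquad
\beta_i =
\begin{cases}
\beta_i + d_i, & |\beta_i + d_i| > \sqrt{2\lambda},\\[2pt]
0, & |\beta_i + d_i| \le \sqrt{2\lambda},
\end{cases}
\]
\[
\text{so the active set can be read off as }
\mathcal{A} = \{\, i : |\beta_i + d_i| > \sqrt{2\lambda} \,\}.
\]
```

The fixed-point form β = H(β + d), with H the hard-thresholding operator, is what lets the active set be updated from the current primal-dual pair in closed form.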
“…Since the minimization problem (1.2) is a non-convex and non-smooth optimization problem, it is not easy to design a numerical algorithm that attains this oracle estimator. Inspired by [12,13,17], we propose a primal-dual active set (PDAS) algorithm for a fixed regularization parameter λ. Coupled with a warm-start strategy as its globalization, this yields the PDAS with continuation (PDASC) method.…”
Section: Primal Dual Active Set (PDAS) Algorithm with Continuation (mentioning)
confidence: 99%
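
The continuation idea can be sketched in a few lines. Here `pdas_fixed` stands for any fixed-λ PDAS solver that accepts a warm start; its signature is our assumption, and the decreasing grid of λ values is a common but illustrative choice.

```python
import numpy as np

def pdasc(X, y, lams, pdas_fixed):
    """PDAS with continuation (sketch): solve along a decreasing grid of
    regularization parameters, warm-starting each solve from the previous
    solution. `pdas_fixed(X, y, lam, beta0)` is an assumed interface, not
    the cited authors' exact code."""
    beta = np.zeros(X.shape[1])
    path = []
    for lam in sorted(lams, reverse=True):   # start heavily regularized
        beta = pdas_fixed(X, y, lam, beta)   # warm start from previous beta
        path.append((lam, beta.copy()))
    return path
```

Starting at large λ keeps early solves very sparse and cheap, and each warm start places the next solve close to its solution, which is what makes the continuation loop act as a globalization strategy.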
“…Inspired by [8,12,15,17], we propose a primal-dual active set (PDAS) algorithm to compute the optimal solution to (1.2). PDAS can be viewed as a generalized Newton method that involves two steps in each iteration.…”
Section: Introduction (mentioning)
confidence: 99%
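
The "two steps" can be made explicit in the notation sketched above (our assumption): each iteration first updates the active set from the current primal-dual pair, then performs one restricted linear solve, which is the generalized (semismooth) Newton step on the KKT system.

```latex
\[
\text{Step 1: } \mathcal{A} \leftarrow \{\, i : |\beta_i + d_i| > \sqrt{2\lambda} \,\}.
\]
\[
\text{Step 2: } X_{\mathcal{A}}^{\top} X_{\mathcal{A}}\, \beta_{\mathcal{A}} = X_{\mathcal{A}}^{\top} y,
\quad \beta_{\mathcal{A}^{c}} = 0,
\quad d_{\mathcal{A}} = 0,
\quad d_{\mathcal{A}^{c}} = \tfrac{1}{n} X_{\mathcal{A}^{c}}^{\top} (y - X_{\mathcal{A}} \beta_{\mathcal{A}}).
\]
```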
“…To obtain the theoretical complexity of the algorithm, the following should be considered: (i) the construction of the linear model to which the variable selection procedure is applied (i.e., the estimation of the conditional expectations in (8)) and (ii) the application of the variable selection procedure to that linear model. For a fixed value in the grid defined in Remark 1 and the given tuning parameters h, wₙ, and the remaining tuning parameter, the theoretical complexity of (i) is O(n²wₙ), while for (ii) the theoretical complexity of the most computationally efficient algorithm is O(nwₙ) (Shi et al. 2020). Therefore, the theoretical complexity of the proposed FASSMR algorithm is O(n²wₙ) multiplied by the number of values in that grid.…”
Section: Outputs of FASSMR (mentioning)
confidence: 99%
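
As a worked tally of that claim, writing kₙ for the number of candidate values in the grid of Remark 1 (the symbol kₙ is our placeholder; the original symbol did not survive extraction):

```latex
\[
\underbrace{O(n^{2} w_n)}_{\text{(i) build the linear model}}
\;+\;
\underbrace{O(n w_n)}_{\text{(ii) variable selection}}
\;=\; O(n^{2} w_n) \ \text{per candidate value},
\qquad
\text{total: } O(n^{2} w_n \, k_n).
\]
```

The construction step (i) dominates each candidate's cost, so the variable selection step (ii) disappears from the overall rate.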