2022
DOI: 10.1007/s13042-022-01679-4
Sparse multi-label feature selection via dynamic graph manifold regularization

Cited by 21 publications (10 citation statements)
References 33 publications
“…Similar to Theorem 2, we can prove that the iteration scheme (28) converges to the unique solution of the regularized SCAD problem of the form…”
Section: Convergence
confidence: 68%
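For context on the citance above: the quote truncates before the problem formulation, but the SCAD penalty it refers to has a standard definition (Fan and Li's smoothly clipped absolute deviation). A minimal LaTeX sketch of that standard form follows; the citing paper's specific iteration scheme (28) and regularized problem are not reproduced here.

```latex
% Standard SCAD penalty, with \lambda > 0 and a > 2 (a = 3.7 is the
% conventional default); this is the generic form, not the citing
% paper's specific regularized problem (28).
p_\lambda(t) =
\begin{cases}
\lambda\,|t|, & |t| \le \lambda,\\
\dfrac{2a\lambda|t| - t^2 - \lambda^2}{2(a-1)}, & \lambda < |t| \le a\lambda,\\
\dfrac{(a+1)\lambda^2}{2}, & |t| > a\lambda.
\end{cases}
```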
“…The iteratively reweighted algorithms were generalized in [37] to tackle the class of problems given by the sum of a convex function and a nondecreasing function applied to another convex function. In [14], a general framework of multi-stage convex relaxation was given, which iteratively refines the convex relaxation to achieve better solutions for nonconvex optimization problems. It includes previous approaches such as LQA, LLA and the concave-convex procedure (CCCP) [38] as special cases.…”
Section: Methods
confidence: 99%
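To make the reweighting idea in this citance concrete, here is a minimal Python sketch of an iteratively reweighted ℓ_1 scheme, assuming a log-sum-style surrogate and an inner ISTA solver; the function name irl1 and all parameter values are illustrative, not the specific algorithms of [37] or [14].

```python
import numpy as np

def irl1(A, y, lam=0.1, eps=1e-3, outer=10, inner=200):
    """Iteratively reweighted l1 (illustrative sketch): each outer step
    solves a weighted-l1 problem via proximal gradient (ISTA), then
    updates the weights w_i = 1/(|x_i| + eps), which majorizes a
    log-sum penalty."""
    m, n = A.shape
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    w = np.ones(n)
    for _ in range(outer):
        for _ in range(inner):           # inner ISTA on 0.5||Ax-y||^2 + lam*sum(w*|x|)
            g = A.T @ (A @ x - y)
            z = x - g / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)
        w = 1.0 / (np.abs(x) + eps)      # reweighting step
    return x

# Toy usage: recover a 5-sparse signal from 40 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = rng.standard_normal(5)
y = A @ x_true
x_hat = irl1(A, y)
print("support recovered:", np.nonzero(np.abs(x_hat) > 1e-2)[0])
```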
“…To remedy this problem, many nonconvex sparse recovery methods have been employed to better approximate the ℓ_0-norm and enhance sparsity. They include ℓ_p (0 < p < 1) [10][11][12], smoothed L0 (SL0) [13], Capped-L1 [14], transformed ℓ_1 (TL1) [15], smoothly clipped absolute deviation (SCAD) [9], minimax concave penalty (MCP) [16], nonconvex shrinkage methods [17], exponential-type penalty (ETP) [18,19], the error function (ERF) method [20], ℓ_1 − ℓ_2 [21,22], ℓ_r^r − αℓ_1^r (α ∈ [0, 1], r ∈ (0, 1]) [23], ℓ_1/ℓ_2 [24,25], q-ratio sparsity minimization [26] and smoothed ℓ_p-over-ℓ_q (SPOQ) [27], among others. For a more comprehensive view, please see the survey on nonconvex regularization [28] and the references therein.…”
Section: Introduction
confidence: 99%
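As a quick illustration of how some of the listed surrogates approximate the ℓ_0-norm, the sketch below evaluates a few of them pointwise using their standard textbook definitions; the parameter values (p, a, λ, γ) are arbitrary demo choices, not values from the cited works.

```python
import numpy as np

# Scalar versions of a few penalties from the list above, each an
# approximation of the l0 indicator 1{t != 0} (standard definitions).

def lp(t, p=0.5):                       # l_p quasi-norm, 0 < p < 1
    return np.abs(t) ** p

def tl1(t, a=1.0):                      # transformed l1 (TL1)
    return (a + 1) * np.abs(t) / (a + np.abs(t))

def mcp(t, lam=1.0, gamma=2.0):         # minimax concave penalty (MCP)
    at = np.abs(t)
    return np.where(at <= gamma * lam,
                    lam * at - at ** 2 / (2 * gamma),
                    gamma * lam ** 2 / 2)

t = np.linspace(-3, 3, 7)
print(lp(t), tl1(t), mcp(t), sep="\n")
```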
“…A key problem is how to choose the regularization function g(•) to improve the reconstruction accuracy and reduce the computational cost. Although the ℓ_{2,1}-norm regularization has gained popularity and has been widely used in various applications, it has been shown to be suboptimal in many cases (in particular, it cannot enforce further sparsity when applied to CS) [39][40][41], since the ℓ_1-norm is a loose approximation of the ℓ_0-norm and often leads to an over-penalized problem. Consequently, some further improvements are required.…”
Section: Non-convex Optimization For Signal Reconstruction
confidence: 99%
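To illustrate the over-penalization point in this citance, the sketch below contrasts the soft-thresholding operator (the proximal map behind ℓ_1/ℓ_{2,1} shrinkage) with firm thresholding, the proximal map of the MCP; this is a generic textbook contrast, not the reconstruction algorithm of the citing paper.

```python
import numpy as np

def soft_threshold(z, lam):
    """Prox of the l1 penalty: shrinks every surviving entry by lam,
    biasing large coefficients (the over-penalization noted above)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def firm_threshold(z, lam, gamma=2.0):
    """Prox of the MCP (a common nonconvex surrogate): same threshold
    at lam, but entries with |z| > gamma*lam pass through unshrunk,
    giving equally sparse but less biased solutions. Needs gamma > 1."""
    az = np.abs(z)
    shrunk = np.sign(z) * gamma * np.maximum(az - lam, 0.0) / (gamma - 1.0)
    return np.where(az > gamma * lam, z, shrunk)

z = np.array([-3.0, -1.2, 0.4, 1.2, 3.0])
print(soft_threshold(z, 1.0))   # every survivor shrunk by 1.0
print(firm_threshold(z, 1.0))   # |z| >= 2 kept exactly; same zeros
```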