2014
DOI: 10.1109/tsp.2014.2298839
Sparse Signal Estimation by Maximally Sparse Convex Optimization

Abstract: This paper addresses the problem of sparsity-penalized least squares for applications in sparse signal processing, e.g., sparse deconvolution. It aims to induce sparsity more strongly than ℓ1-norm regularization while avoiding non-convex optimization. To this end, the paper describes the design and use of non-convex penalty functions (regularizers) constrained so as to ensure the convexity of the total cost function, F, to be minimized. The method is based on parametric penalty functions,…

Cited by 162 publications (132 citation statements).
References 65 publications (108 reference statements).
“…The Logarithmic and Arctangent (parametric) penalty functions were introduced in [7], which also focused on the conditions to be met by non-convex penalty functions so as to ensure the convexity of the total cost function of (1). While these penalty functions exhibit less bias (than the ℓ1-norm), they are particularly convenient in algorithms for solving (1) that do not use the corresponding thresholding rules directly but rather the derivatives of the penalty functions, such as IRLS [14], FOCUSS [15], and (majorization-minimization) MM-based methods [16].…”
Section: A. Non-ℓ1-Norm Penalty Functions
confidence: 99%
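The derivative-based iterations this quote alludes to can be sketched numerically. Below is a minimal MM (majorization-minimization) iteration for the penalized least-squares problem, assuming the simplest case H = I (denoising) and the parametric log penalty; the signal `y` and the parameters `lam` and `a` are illustrative choices, not taken from the cited works (note `a = 0.5 <= 1/lam`, consistent with the convexity constraint discussed above):

```python
import numpy as np

# Parametric log penalty phi(x) = (1/a) * log(1 + a|x|) and its
# derivative; only the derivative enters the iteration, which is
# the point the quoted passage makes about IRLS/FOCUSS/MM methods.
def phi(x, a):
    return np.log(1 + a * np.abs(x)) / a

def dphi(x, a):  # derivative of phi with respect to |x|
    return 1.0 / (1 + a * np.abs(x))

def cost(x, y, lam, a):  # F(x) = 0.5*||y - x||^2 + lam * sum phi(x_i)
    return 0.5 * np.sum((y - x) ** 2) + lam * np.sum(phi(x, a))

def mm_denoise(y, lam=1.0, a=0.5, iters=50, eps=1e-10):
    """MM iteration for min_x 0.5*||y - x||^2 + lam * sum(phi(x_i)).

    Each step majorizes phi by a quadratic built from its derivative
    (phi is concave in x^2, so w_i * x^2 / 2 + const majorizes it with
    w_i = phi'(|x_i|)/|x_i|), guaranteeing a non-increasing cost.
    """
    x = y.copy()
    costs = [cost(x, y, lam, a)]
    for _ in range(iters):
        w = dphi(x, a) / (np.abs(x) + eps)  # IRLS-style weights
        x = y / (1 + lam * w)               # closed-form minimizer (H = I)
        costs.append(cost(x, y, lam, a))
    return x, costs

y = np.array([3.0, 0.1, -2.0, 0.05])
x_hat, costs = mm_denoise(y)
```

The small entries of `y` are driven toward zero much more aggressively than large ones, which is the reduced-bias behavior the quote attributes to these penalties.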
“…are the same as for (1). Recently, several works [5], [6], [7], [8] have proposed or assessed the use of different penalty functions for (1); the list includes the Logarithmic and Arctangent penalty functions [6], as well as those associated with the Non-negative Garrote (NNG) [9], SCAD [10], and Firm [11] thresholding rules. The key sought-after property of these alternatives is to induce sparsity more strongly than the ℓ1-norm penalty function.…”
Section: Introduction
confidence: 99%
“…As a → 0, the log and atan penalties approach the absolute value function. The atan penalty was derived to promote sparsity more strongly than the log penalty [33]. It will be useful below to define … as … .…”
Section: A. Problem Formulation
confidence: 99%
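The limiting behavior mentioned in this quote is easy to check numerically. The sketch below uses the parametric forms of the log and atan penalties associated with the cited paper (reproduced here as an assumption, with `a` the non-convexity parameter); both tend to |x| as a → 0, and the atan penalty lies below the log penalty, reflecting its stronger sparsity promotion:

```python
import numpy as np

def phi_log(x, a):
    # Log penalty: (1/a) * log(1 + a|x|); tends to |x| as a -> 0.
    return np.log(1 + a * np.abs(x)) / a

def phi_atan(x, a):
    # Atan penalty (form assumed from the parametric family in [33]);
    # its derivative is 1 / (1 + a|x| + a^2 x^2), and it also tends
    # to |x| as a -> 0 while staying below the log penalty.
    t = np.abs(x)
    return (2.0 / (a * np.sqrt(3))) * (
        np.arctan((1 + 2 * a * t) / np.sqrt(3)) - np.pi / 6)

# Both penalties approach |x| = 2 as the parameter a shrinks.
for a in (1.0, 0.1, 0.001):
    print(a, phi_log(2.0, a), phi_atan(2.0, a))
```

For a fixed a > 0 one observes phi_atan(x) < phi_log(x) < |x|, i.e., the atan penalty penalizes large coefficients the least, which is why it was derived to promote sparsity more strongly.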
“…Hence, non-convex penalties should be specified with care. One approach to avoid the issue of entrapment in local minima is to specify the non-convex penalties such that the total objective function, F, is convex [7], [26], [27], [33]. Then the total objective function, owing to its convexity, does not possess sub-optimal local minima, and a globally optimal solution can be reliably found.…”
Section: E. Setting the Non-Convexity Parameters
confidence: 99%
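The constraint this quote describes can be illustrated for the scalar denoising cost F(x) = 0.5(y − x)² + λφ(x; a). Assuming the atan penalty whose first derivative is 1/(1 + ax + a²x²) (a form associated with the cited paper, stated here as an assumption), F''(x) = 1 + λφ''(x), and the worst-case curvature occurs at x = 0 where φ''(0⁺) = −a, giving the condition a ≤ 1/λ. A numerical check:

```python
import numpy as np

def d2_phi_atan(x, a):
    """Second derivative (for x > 0) of the atan penalty whose first
    derivative is 1 / (1 + a*x + a^2*x^2)."""
    q = 1 + a * x + a ** 2 * x ** 2
    return -(a + 2 * a ** 2 * x) / q ** 2

def min_curvature(lam, a):
    # Curvature of F(x) = 0.5*(y - x)^2 + lam*phi(x; a) on x >= 0:
    # F''(x) = 1 + lam * phi''(x).  F is convex iff this stays >= 0,
    # and the minimum is attained at x = 0, where it equals 1 - lam*a.
    xs = np.linspace(0.0, 10.0, 2001)
    return float(np.min(1.0 + lam * d2_phi_atan(xs, a)))
```

At the boundary a = 1/λ the curvature just touches zero; any larger a makes F non-convex, which is why the quoted works tune the non-convexity parameter against the regularization weight.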
“…As a result, these methods suffer from numerical problems for smaller values of p. Thus, an attractive solution is to employ Lipschitz-continuous approximations, such as the exponential function, the logarithm function, or sigmoid functions, e.g., [20,23,34]. The arctan function is also used in various works for sparse regularization, such as approximating the sign function appearing in the derivative of the ℓ1-norm term in [35], introducing a penalty function for sparse signal estimation by the maximally-sparse convex approach in [36], or approximating the ℓ0-norm term through a weighted ℓ1-norm term in [23].…”
Section: Introduction
confidence: 99%
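The two arctan-based approximations this quote mentions can be sketched in a few lines. This is a generic illustration, not the formulation of any specific cited work; the smoothing parameter `eps` is an arbitrary choice:

```python
import numpy as np

EPS = 1e-3  # smoothing parameter (illustrative choice)

def soft_sign(x, eps=EPS):
    # Smooth, Lipschitz-continuous surrogate for sign(x), of the kind
    # used when differentiating an l1-norm term: (2/pi)*arctan(x/eps).
    return (2 / np.pi) * np.arctan(x / eps)

def soft_l0(x, eps=EPS):
    # Smooth surrogate for the l0 "norm" (count of nonzero entries):
    # each coordinate contributes ~1 if |x_i| >> eps and ~0 otherwise.
    return float(np.sum((2 / np.pi) * np.arctan(np.abs(x) / eps)))

print(soft_sign(3.0))                             # close to 1
print(soft_l0(np.array([3.0, 0.0, -2.0, 0.0])))   # close to 2
```

As eps → 0 both surrogates converge pointwise to the non-smooth quantities they replace, while remaining differentiable everywhere, which is what makes them attractive for gradient-based solvers.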