2018
DOI: 10.1137/18m1171989

Composite Optimization by Nonconvex Majorization-Minimization

Abstract: The minimization of a nonconvex composite function can model a variety of imaging tasks. A popular class of algorithms for solving such problems is majorization-minimization techniques, which iteratively approximate the nonconvex composite function by a majorizing function that is easy to minimize. Most techniques, e.g., gradient descent, utilize convex majorizers in order to guarantee that the majorizer is easy to minimize. In our work we consider a natural class of nonconvex majorizers for these functions, an…
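To make the majorization-minimization template concrete, here is a minimal sketch of the generic MM loop the abstract describes, with gradient descent recovered as the special case of a convex quadratic majorizer. The toy objective, the constant L, and all function names are illustrative assumptions, not the paper's nonconvex majorizers.

```python
import numpy as np

def majorize_minimize(x0, surrogate_argmin, objective, max_iter=100, tol=1e-8):
    """Generic majorization-minimization loop.

    surrogate_argmin(x_k) returns the minimizer of a majorizer g(., x_k)
    with g(x, x_k) >= f(x) for all x and g(x_k, x_k) = f(x_k); these two
    properties give monotone descent: f(x_{k+1}) <= g(x_{k+1}, x_k) <= f(x_k).
    """
    x = x0
    for _ in range(max_iter):
        x_next = surrogate_argmin(x)                     # minimize the majorizer at x
        if abs(objective(x) - objective(x_next)) < tol:  # stop on negligible descent
            return x_next
        x = x_next
    return x

# Gradient descent as MM: for smooth f with L-Lipschitz gradient,
# g(x, x_k) = f(x_k) + <grad f(x_k), x - x_k> + (L/2)||x - x_k||^2 is a convex
# quadratic majorizer whose minimizer is the gradient step x_k - grad f(x_k)/L.
f = lambda x: np.sum(x**2) + np.sum(np.cos(x))   # toy nonconvex objective (assumed)
grad_f = lambda x: 2.0 * x - np.sin(x)
L = 3.0                                          # Hessian diagonal is 2 - cos(x) <= 3
step = lambda x: x - grad_f(x) / L               # closed-form argmin of g(., x)

x_star = majorize_minimize(np.array([2.0, -1.5]), step, f)
print(x_star, f(x_star))
```

The majorization property alone is what drives the monotone decrease of the objective; nothing in the loop requires the majorizer to be convex, which is the degree of freedom the paper exploits.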

Cited by 17 publications (10 citation statements) · References 63 publications
“…This is a linearized variant of the parametric majorization bound and, as such, a nonconvex composite majorizer in the sense of [39]; hence a key property of majorization-minimization techniques carries over to the parametrized setting, choosing x̄_i = x_i(θ_k):…”
Section: Iterative Majorizers
confidence: 99%
“…In practice, sequentially minimizing convex problems with weighted-ℓ1 priors is indeed much simpler than minimizing a non-convex problem with a log-sum prior. From a convergence point of view, multiple works have recently shown that the set of minimizers resulting from a reweighted-ℓ1 procedure coincides with the one obtained by minimizing a problem with a log-sum prior (Ochs et al. 2015; Geiping & Moeller 2018; Ochs et al. 2019). In the following paragraphs, the SARA (Carrillo et al. 2012) and HyperSARA (Abdulaziz et al. 2019b) log-sum priors are presented and considered as benchmarks to assess the proposed spatio-spectral faceted prior.…”
Section: State-of-the-art Average Sparsity Priors
confidence: 97%
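The reweighted-ℓ1 scheme discussed in this statement can be sketched in a few lines for the simplest (denoising) case, where each convex weighted-ℓ1 subproblem has a closed-form soft-thresholding solution; majorizing the concave log terms by weighted-ℓ1 terms is what ties the procedure to the log-sum prior. The regularizer weight lam, the smoothing eps, and the cycle count are assumed values for illustration, not from the cited works.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def reweighted_l1_denoise(y, lam=0.5, eps=1e-3, n_cycles=10):
    """Reweighted-l1 cycles for the log-sum-prior denoising problem
        min_x  0.5 * ||x - y||^2 + lam * sum_i log(|x_i| + eps).
    Each cycle majorizes the concave log terms by weighted-l1 terms with
    weights w_i = 1 / (|x_i| + eps) and solves the resulting convex
    subproblem exactly by soft-thresholding.
    """
    x = y.copy()
    for _ in range(n_cycles):
        w = 1.0 / (np.abs(x) + eps)     # weights from the current iterate
        x = soft_threshold(y, lam * w)  # minimizer of the weighted-l1 subproblem
    return x

y = np.array([3.0, 0.05, -1.2, 0.01])
print(reweighted_l1_denoise(y))         # large entries survive, small ones vanish
```

Note that this is itself a majorization-minimization scheme: since log is concave, each weighted-ℓ1 subproblem majorizes the log-sum objective at the current iterate, which is exactly the connection the cited convergence results formalize.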
“…As proposed by the reweighting framework, problem (7) is solved several times, considering different weighting matrices W, in order to mimic the ℓ0 pseudo-norm. Note that the reweighting framework is underpinned by recent theoretical results [47-50] showing that sequentially minimizing convex problems with weighted-ℓ1 priors corresponds to solving for a critical point of a non-convex problem with a log-sum prior, itself a close approximation to the target ℓ0 prior. Considering one weighting cycle, a simple algorithm to solve problem (7) is the forward-backward (FB) algorithm [51].…”
Section: Algorithm for Constrained Weighted-ℓ1 Minimization
confidence: 99%
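The forward-backward (FB) algorithm mentioned in this statement alternates a gradient step on the smooth data-fidelity term with a proximal step on the weighted-ℓ1 prior. Since problem (7) of the cited paper is not reproduced here, the sketch below targets a penalized least-squares stand-in rather than the constrained formulation; the matrix sizes, the weights, and the function name are assumptions.

```python
import numpy as np

def fb_weighted_l1(A, y, w, n_iter=500):
    """Forward-backward iterations for the penalized stand-in
        min_x  0.5 * ||A x - y||^2 + sum_i w_i |x_i|.
    Forward step: gradient descent on the smooth data term.
    Backward step: proximal map of the weighted-l1 prior (soft-thresholding).
    """
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2      # step <= 1 / Lipschitz(gradient)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v = x - gamma * (A.T @ (A @ x - y))                      # forward step
        x = np.sign(v) * np.maximum(np.abs(v) - gamma * w, 0.0)  # backward step
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[3, 17, 31]] = [2.0, -1.5, 1.0]
y = A @ x_true
print(np.round(fb_weighted_l1(A, y, np.ones(50)), 2))  # recovers a sparse estimate
```

Within one weighting cycle the weights w stay fixed, so the subproblem is convex and the FB iterations converge to its solution; the outer reweighting loop of the previous sketch then updates w.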