2018
DOI: 10.1109/tsp.2018.2868269

Iteratively Linearized Reweighted Alternating Direction Method of Multipliers for a Class of Nonconvex Problems

Abstract: In this paper, we consider solving a class of nonconvex and nonsmooth problems frequently appearing in signal processing and machine learning research. The traditional alternating direction method of multipliers (ADMM) encounters both theoretical and computational difficulties when solving the nonconvex and nonsmooth subproblem. In view of this, we propose a reweighted alternating direction method of multipliers. In this algorithm, all subproblems are convex and easy to solve. We also provide several guarantees for the c…
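The abstract's core idea — keeping the ADMM splitting but replacing the nonconvex subproblem with a convex reweighted surrogate, so every step is easy — can be illustrated on an ℓ_q-regularized least-squares problem. This is a minimal sketch under our own assumptions, not the paper's exact algorithm: the function name, parameter defaults, and the reweighting rule w_i = q/(|z_i| + ε)^{1−q} (which makes the weighted ℓ1 term a majorizer of |z_i|^q) are all illustrative choices.

```python
import numpy as np

def reweighted_admm(A, b, lam=0.01, rho=1.0, q=0.5, eps=1e-3, n_iter=300):
    """Illustrative iteratively reweighted ADMM sketch for
        min_x 0.5*||A x - b||^2 + lam * sum_i |x_i|^q   (0 < q < 1),
    where the nonconvex |.|^q term is replaced each sweep by a
    reweighted l1 surrogate, so both subproblems are convex."""
    m, n = A.shape
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)           # scaled dual variable
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse
    w = np.ones(n)            # surrogate weights
    for _ in range(n_iter):
        # x-subproblem: convex ridge-type least squares (Cholesky solve)
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # z-subproblem: convex weighted soft-thresholding
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam * w / rho, 0.0)
        # dual update (ascent on the multiplier)
        u += x - z
        # reweighting: w_i = q / (|z_i| + eps)^(1 - q), an assumed rule
        w = q / (np.abs(z) + eps) ** (1.0 - q)
    return z
```

On a noiseless sparse-recovery instance (Gaussian A, a few spikes), the reweighting drives the small coefficients to exact zero while barely shrinking the large ones, which is the practical appeal of ℓ_q-type penalties over plain ℓ1.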

Cited by 34 publications (17 citation statements)
References 47 publications (76 reference statements)
“…However, according to the definition of the ℓ_{q,ε}-norm (2), problem (5) is nonconvex and the traditional ADMM algorithm cannot solve it effectively. In paper [10], a reweighted ADMM is proposed to solve a similar problem. We employ the same techniques proposed in [10].…”
Section: The Nonconvex Algorithm
confidence: 99%
“…Therefore, problems (4), (10) and (12) are non-convex and it is difficult to obtain the global optimum. In what follows, we design heuristic optimization algorithms to solve the problems locally via the penalty dual decomposition (PDD) method and the successive convex approximation (SCA) method [30]–[33].…”
Section: Design of Optimization Algorithms
confidence: 99%
“…dual ascent method) and in particular achieves convergence without requiring specific assumptions on the objective function, i.e., strict convexity and finiteness [30]–[33]. The smaller the value of ρ, the greater the probability that equality (33d) holds.…”
Section: B. Convexity of Problem
confidence: 99%
“…The Lena image is used in the numerical experiments. We solve (3) for q = 0.5, and use the nonconvex ADMM proposed in [9] for comparison. The performance of the proposed deblurring algorithms is routinely measured by means of the signal-to-noise ratio (SNR)…”
Section: Application to Image Deblurring
confidence: 99%
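The SNR quoted in the last excerpt is, by convention, the ratio of signal energy to reconstruction-error energy in decibels, SNR = 10 log10(‖x‖² / ‖x − x̂‖²). A minimal sketch (the function name is ours):

```python
import numpy as np

def snr_db(x_true, x_est):
    """Signal-to-noise ratio in dB, the standard deblurring metric:
    SNR = 10 * log10( ||x||^2 / ||x - x_hat||^2 )."""
    err_energy = np.linalg.norm(np.ravel(x_true) - np.ravel(x_est)) ** 2
    sig_energy = np.linalg.norm(np.ravel(x_true)) ** 2
    return 10.0 * np.log10(sig_energy / err_energy)
```

For example, a signal of norm 5 reconstructed with error norm 0.5 gives 10·log10(25/0.25) = 20 dB.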