2019
DOI: 10.1186/s13660-019-2145-0
A regularized alternating direction method of multipliers for a class of nonconvex problems

Abstract: In this paper, we propose a regularized alternating direction method of multipliers (RADMM) for a class of nonconvex optimization problems. The algorithm does not require the regularization term to be strictly convex. First, we prove the global convergence of the algorithm. Second, under the condition that the augmented Lagrangian function satisfies the Kurdyka-Łojasiewicz property, the strong convergence of the algorithm is established. Finally, some preliminary numerical results are reported to support the efficiency of the algorithm.
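To make the two-block setting of the abstract concrete, the sketch below runs a plain ADMM iteration on the special case quoted by the citing papers: min µ‖x‖₁ + ½‖y − c‖² subject to x = y. This is a generic ADMM, not the paper's RADMM (no regularizing proximal term is included), and the data and parameter values (`c`, `mu`, `rho`) are invented for illustration only.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1(c, mu=0.1, rho=1.0, iters=300):
    """Two-block ADMM for  min mu*||x||_1 + ||y - c||^2/2  s.t.  x = y."""
    n = c.size
    x = np.zeros(n)       # first block (sparse variable)
    y = np.zeros(n)       # second block (smooth variable)
    lam = np.zeros(n)     # multiplier for the constraint x - y = 0
    for _ in range(iters):
        # x-step: argmin mu*||x||_1 + (rho/2)*||x - y + lam/rho||^2
        x = soft_threshold(y - lam / rho, mu / rho)
        # y-step: argmin ||y - c||^2/2 + (rho/2)*||x - y + lam/rho||^2
        y = (c + rho * x + lam) / (1.0 + rho)
        # dual ascent on the constraint residual
        lam = lam + rho * (x - y)
    return x

c = np.array([0.05, -2.0, 0.3, 1.5, -0.02])
x = admm_l1(c, mu=0.5)
print(x)  # entries of c smaller than mu in magnitude are shrunk to zero
```

At the fixed point this reproduces the closed-form solution soft_threshold(c, mu) of min µ‖x‖₁ + ½‖x − c‖², which is a useful sanity check for any ADMM variant applied to this problem class.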

Cited by 4 publications (4 citation statements). References 23 publications.
“…In the following, we compare Algorithm 1.1 with the regularized ADMM (RADMM, [26]) for solving two different cases of the sparse signal recovery problem…”
Section: Comparative Experiments
confidence: 99%
“…We also apply the Bregman modification of ADMM (BADMM, [39]) and the symmetric ADMM (SADMM, [41]) to solve the problems (1.2) and (1.3) by introducing the auxiliary variable y = x for the sparse objective function. The proximal matrix in [26] is G = αI − βAᵀA, where (α, β) = (2.5, 0.12). The Bregman distance and related parameters of [39] use the default settings mentioned therein, while the penalty parameter is set to 0.15 since it performs better than using the value 10. (Note that this is also a special case of (1.1) with f(x) = µ‖x‖₁, g(y) = ‖y − c‖²/2, B = −I and b = 0.)…”
Section: Comparative Experiments
confidence: 99%
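The proximal matrix G = αI − βAᵀA quoted above is the standard linearization device: adding ½‖x − xₖ‖²_G to the x-subproblem cancels the AᵀA coupling, so the update reduces to soft-thresholding. The sketch below shows this mechanism on a sparse recovery problem min µ‖x‖₁ + ½‖y − c‖² s.t. Ax = y, under the assumption that β plays the role of the penalty parameter ρ; the problem data, stopping rule, and function name `radmm_sparse` are illustrative inventions, not the authors' code.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def radmm_sparse(A, c, mu=0.1, alpha=2.5, rho=0.12, iters=1000):
    """Proximal (linearized) ADMM for  min mu*||x||_1 + ||y - c||^2/2
    s.t. A x - y = 0, with proximal matrix G = alpha*I - rho*A^T A.
    Requires alpha > rho * lambda_max(A^T A) so that G is positive definite."""
    m, n = A.shape
    x = np.zeros(n)
    y = np.zeros(m)
    lam = np.zeros(m)            # multiplier for A x - y = 0
    for _ in range(iters):
        # x-step: with G = alpha*I - rho*A^T A the quadratic A^T A terms
        # cancel, leaving a prox-gradient (soft-thresholding) update.
        v = y - lam / rho
        x = soft_threshold(x - (rho / alpha) * A.T @ (A @ x - v), mu / alpha)
        # y-step: argmin ||y - c||^2/2 + (rho/2)*||A x - y + lam/rho||^2
        y = (c + rho * (A @ x) + lam) / (1.0 + rho)
        # dual update
        lam = lam + rho * (A @ x - y)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5)) / np.sqrt(8)   # normalized sensing matrix
x_true = np.array([1.5, 0.0, 0.0, -2.0, 0.0])  # sparse ground truth
c = A @ x_true                                  # noiseless measurements
x = radmm_sparse(A, c, mu=0.1)
```

With the quoted values (α, β) = (2.5, 0.12) and a normalized A, the condition αI ≻ βAᵀA holds comfortably, which is what makes the linearized x-step a valid majorization of the original subproblem.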
“…[Th]is widespread application has sparked a strong interest in further understanding the theoretical nature of the ADMM (see [14][15][16][17])…”
Section: Introduction
confidence: 99%
“…Combining the Bregman distance, Wang et al. [31] obtained the convergence of Bregman ADMM in the nonconvex multi-block case. For more research on multi-block splitting methods, see [21,33,34,35,36] and the references therein…”
confidence: 99%