2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2015.7178631

A consensus-based decentralized algorithm for non-convex optimization with application to dictionary learning

Abstract: In handling massive-scale signal processing problems arising from 'big-data' applications, key technologies could come from the development of decentralized algorithms. In this context, consensus-based methods have been advocated because of their simplicity, fault tolerance and versatility. This paper presents a new consensus-based decentralized algorithm for a class of non-convex optimization problems that arises often in inference and learning problems, including 'sparse dictionary learning' as a special case…
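The abstract is truncated above. Purely as an illustration of the general idea it describes, and not a reproduction of the paper's actual algorithm, the sketch below shows one round of a consensus-based decentralized update for dictionary learning: each node sparse-codes its own data, averages its dictionary copy with its neighbors' copies through a mixing matrix, and then takes a local gradient step. All names (local_sparse_code, decentralized_dict_round, W, lam) are illustrative assumptions.

```python
import numpy as np

def local_sparse_code(D, Y, lam, n_iters=50, step=0.1):
    """ISTA sparse coding of local data Y against dictionary copy D:
    min_A 0.5*||Y - D A||_F^2 + lam*||A||_1 (the step size is a heuristic)."""
    A = np.zeros((D.shape[1], Y.shape[1]))
    for _ in range(n_iters):
        A = A - step * D.T @ (D @ A - Y)                           # gradient step on the fit term
        A = np.sign(A) * np.maximum(np.abs(A) - step * lam, 0.0)   # soft-thresholding
    return A

def decentralized_dict_round(D_locals, Y_locals, W, lam, step):
    """One hypothetical consensus + local-gradient round.

    D_locals : list of (m, K) dictionary copies, one per node
    Y_locals : list of (m, n_i) local data matrices
    W        : (N, N) doubly stochastic mixing matrix, nonzero only between
               neighbors in the communication graph
    """
    N = len(D_locals)
    # each node sparse-codes its own data with its current dictionary copy
    A_locals = [local_sparse_code(D_locals[i], Y_locals[i], lam) for i in range(N)]
    D_next = []
    for i in range(N):
        # consensus step: mix the local dictionary with the neighbors' copies
        D_mix = sum(W[i, j] * D_locals[j] for j in range(N))
        # local gradient step on this node's own reconstruction error
        grad = (D_mix @ A_locals[i] - Y_locals[i]) @ A_locals[i].T
        D_new = D_mix - step * grad
        # keep atoms on the unit sphere, a common dictionary-learning constraint
        D_new /= np.maximum(np.linalg.norm(D_new, axis=0, keepdims=True), 1e-12)
        D_next.append(D_new)
    return D_next
```

With a connected communication graph and a suitably chosen doubly stochastic W, the mixing step drives the local dictionary copies toward a common dictionary while each node only ever accesses its own data.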

Cited by 26 publications (14 citation statements). References: 25 publications.
“…Our results can be contrasted with [17]-[21], wherein gradient schemes tailored with consensus/diffusion updates are employed for some instance of (P1). The aforementioned schemes do not have any convergence guarantees: it is postulated that the sequence generated by the algorithms is convergent (see, e.g., [20], [21]), and then concluded that any limit point is a stationary solution of the problem. Furthermore, some of these schemes do not even achieve consensus among the local variables.…”
Section: B. Discussion
confidence: 99%
“…It follows that using an algorithm in A or A′, it takes at least M/3 iterations for the non-zero x_r[2] and the corresponding gradient vector to propagate to at least one node in the set [2M/3 + 1, M]. Once we have x_j[2] ≠ 0 for some j ∈ [2M/3 + 1, M], then, according to (57), it is possible to have ∂h_j(x_j)/∂x_j[3] ≠ 0, and once this gradient becomes non-zero, the corresponding variable…”
Section: Lemma 3.2
confidence: 99%
“…Problems (1) and (2) have been studied extensively in the literature when the f_i's are all convex; see, for example, [4]-[6]. Primal methods such as the distributed subgradient (DSG) method [4] and the EXTRA method [6], as well as primal-dual methods such as the distributed augmented Lagrangian method [7] and the Alternating Direction Method of Multipliers (ADMM) [8,9], have been proposed. In contrast, only recently have there been works addressing the more challenging problems without assuming convexity of the f_i; see [1], [3], [10]-[23]. The convergence behavior of the distributed consensus problem (1) has been studied in [3,10,11].…”
confidence: 99%
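For reference, the distributed subgradient (DSG) iteration named in the excerpt above has a very compact generic form. The sketch below is a textbook-style illustration rather than code from any of the cited works; it assumes a doubly stochastic mixing matrix W and local cost functions f_i supplied by the caller.

```python
import numpy as np

def dsg_iterate(X, W, subgrads, k, a0=1.0):
    """One distributed (sub)gradient step:
        x_i <- sum_j W[i, j] * x_j - alpha_k * g_i(x_i),
    with a diminishing step size alpha_k = a0 / sqrt(k + 1).

    X        : (N, d) array, row i is node i's local copy of the decision variable
    W        : (N, N) doubly stochastic mixing matrix over the communication graph
    subgrads : list of callables, subgrads[i](x) returns a (sub)gradient of f_i at x
    """
    alpha_k = a0 / np.sqrt(k + 1)
    G = np.vstack([subgrads[i](X[i]) for i in range(X.shape[0])])
    return W @ X - alpha_k * G   # mix with neighbors, then take the local step
```

The diminishing step size is the usual choice for DSG in the convex setting; the excerpt's point is precisely that such convergence guarantees become delicate once the f_i are non-convex.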
“…In the second one (and also in [12], where a more extended version can be found), a distributed version of the centralized K-SVD algorithm is proposed, while a convergence analysis of the algorithm is presented in [13]. Additionally, in [14] a consensus-based, distributed algorithm for general inference/learning problems is proposed, which can also be applied to the problem of dictionary learning. An online algorithm has appeared in [15], where the recursive least squares algorithm is employed.…”
Section: Introduction
confidence: 99%