2016 50th Asilomar Conference on Signals, Systems and Computers
DOI: 10.1109/acssc.2016.7869518

Distributed dictionary learning

Abstract: The paper studies distributed Dictionary Learning (DL) problems where the learning task is distributed over a multi-agent network with time-varying (nonsymmetric) connectivity. This formulation is relevant, for instance, in Big Data scenarios where massive amounts of data are collected/stored in different spatial locations and it is infeasible to aggregate and/or process all data in a fusion center, due to resource limitations, communication overhead, or privacy considerations. We develop a general distributed …
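
The truncated abstract does not state the paper's exact objective. For orientation, a standard dictionary-learning formulation with quadratic loss and an ℓ1 sparsity penalty (an assumption made here for illustration; the paper's loss, regularizer G, and constraint set may differ) reads

\min_{D \in \mathcal{D},\, \{X_i\}_{i=1}^{m}} \ \sum_{i=1}^{m} \Big( \tfrac{1}{2} \| Y_i - D X_i \|_F^2 + \lambda \| X_i \|_1 \Big),

where agent i privately holds the data block Y_i and its sparse codes X_i, D is the dictionary the network must agree on, and \mathcal{D} is a compact constraint set (e.g., unit-norm columns) that removes the scaling ambiguity between D and the codes.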

Citations: cited by 11 publications (6 citation statements)
References: 22 publications
“…Recent works also include the study of optimal convergence rates with respect to the network dependency for strongly convex [26] and convex [27] problems. When the problem becomes non-convex, many algorithms, such as primal-dual based methods [28, 10], gradient-tracking based methods [11, 29], and non-convex extensions of DGD methods [30], have been proposed, for which an O(ε^{-1}) iteration and communication complexity has been shown. Recently, an algorithm that is optimal with respect to the network dependency has also been proposed in [13], with O(γ^{-1/2} ε^{-1}) computation and O(ε^{-1}) communication complexity, where γ denotes the spectral gap of the communication graph G. Note that all of the above algorithms require O(1) full gradient evaluations per iteration, so when directly applied to problems where each f_i(·) takes the form in (2), they all require O(mn ε^{-1}) local data samples.…”
Section: Decentralized Optimization (mentioning)
confidence: 99%
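
The DGD family referenced above admits a very compact synchronous form. Below is a minimal sketch in Python/NumPy of one decentralized gradient step over a fixed doubly stochastic mixing matrix W; the function names, the toy quadratic objective, and the complete-graph weights are illustrative assumptions, not the specific methods of [30] or the other cited works.

import numpy as np

def dgd_step(x, grads, W, step):
    """One synchronous DGD iteration.

    x     : (m, d) array; row i is agent i's current iterate
    grads : list of m callables; grads[i](xi) returns the gradient of f_i at xi
    W     : (m, m) doubly stochastic mixing matrix matching the graph
    step  : gradient step size
    """
    g = np.stack([grads[i](x[i]) for i in range(len(grads))])
    # Average with neighbors (consensus), then take a local gradient step.
    return W @ x - step * g

# Toy usage: agent i holds f_i(x) = 0.5 * ||x - b_i||^2, so the minimizer of
# sum_i f_i is the average of the b_i.
m, d = 4, 3
rng = np.random.default_rng(0)
b = rng.normal(size=(m, d))
grads = [lambda xi, bi=bi: xi - bi for bi in b]
W = np.full((m, m), 1.0 / m)  # complete graph with uniform weights
x = rng.normal(size=(m, d))
for _ in range(400):
    x = dgd_step(x, grads, W, step=0.05)
# Each row of x ends up near b.mean(axis=0), up to the constant-step bias of
# DGD, which the gradient-tracking methods sketched below remove.
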
“…References [14, 15] propose a primal-dual based method for unconstrained problems over a connected network, and derive a global convergence rate for this setting. In [13, 17, 18], the authors utilize a gradient-tracking idea to solve a constrained nonsmooth distributed problem over possibly time-varying networks. The work [19] summarizes recent progress in extending DSG-based methods to non-convex problems.…”
Section: Distributed Non-convex Optimization (mentioning)
confidence: 99%
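
To make the gradient-tracking idea concrete, here is a minimal unconstrained, smooth, fixed-network sketch in Python/NumPy; the constrained, nonsmooth, time-varying schemes of [13, 17, 18] add projection/proximal steps and time-varying weights on top of this template, and all names here are illustrative.

import numpy as np

def gradient_tracking(grads, W, x0, step=0.05, iters=400):
    """Gradient-tracking sketch.

    Each agent i keeps an iterate x_i and a tracker y_i that estimates the
    network-average gradient via dynamic average consensus:
        x^{k+1} = W x^k - step * y^k
        y^{k+1} = W y^k + grad(x^{k+1}) - grad(x^k)
    """
    m = len(grads)
    x = x0.copy()
    g = np.stack([grads[i](x[i]) for i in range(m)])
    y = g.copy()  # tracker initialized at the local gradients
    for _ in range(iters):
        x = W @ x - step * y
        g_new = np.stack([grads[i](x[i]) for i in range(m)])
        y = W @ y + g_new - g  # correct the tracker by the local gradient change
        g = g_new
    return x

Unlike plain DGD, the tracker lets a constant step size drive every agent to a stationary point of the sum of the local costs rather than to a biased neighborhood of it.
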
“…Problems (1) and (2) have been studied extensively in the literature when the f_i are all convex; see, for example, [4-6]. Primal-based methods such as the distributed subgradient (DSG) method [4] and the EXTRA method [6], as well as primal-dual based methods such as the distributed augmented Lagrangian method [7] and the Alternating Direction Method of Multipliers (ADMM) [8, 9], have been proposed. In contrast, only recently have there been works addressing the more challenging setting in which the f_i are not assumed convex; see [1, 3, 10-23]. The convergence behavior of the distributed consensus problem (1) has been studied in [3, 10, 11].…”
mentioning
confidence: 99%
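
For reference, the consensus formulation denoted (1) in this excerpt is, in its standard form (reconstructed here, since the citing paper's own numbering is not visible),

\min_{x \in \mathbb{R}^d} \ f(x) \;=\; \sum_{i=1}^{m} f_i(x),

where f_i is the (possibly non-convex) local cost known only to agent i, and all m agents must agree on a common minimizer x by exchanging information over the communication graph.
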
“…Recently, and independently from our conference work (Daneshmand et al., 2016), Zhao et al. (2016) proposed a distributed primal-dual based method for a class of dictionary learning problems related to, but different from, Problem P. Specifically, they considered quadratic loss functions f_i, with a quadratic regularization on the dictionary (i.e., G = 0), and norm-ball constraints on the private variables. The network is modeled as a fixed undirected graph.…”
Section: Challenges and Related Work (mentioning)
confidence: 99%