2021
DOI: 10.1109/tac.2020.3033712

Online Learning Over Dynamic Graphs via Distributed Proximal Gradient Algorithm

Cited by 23 publications (9 citation statements)
References 43 publications

“…In general, the cost function f_t can be smooth or nonsmooth, and in order to exploit the fine structure of f_t when it is nonsmooth, the cost function can be considered as the sum of two functions, one smooth and the other a nonsmooth regularizer, i.e., composite optimization. Such problems arise naturally in realistic applications involving low-rank structure, sparsity, monotonicity, and so forth [23], [24]. • DOL with Nonseparable Global Objectives.…”
Section: A Further Discussion On Problem Settings
confidence: 99%
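The composite structure described in this statement (a smooth loss plus a nonsmooth regularizer) is exactly what a proximal gradient step exploits. Below is a minimal sketch, assuming a least-squares smooth part and an ℓ1 regularizer; the matrix A, vector b, step size, and weight lam are illustrative placeholders, not taken from the cited works.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||x||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient_step(x, grad_smooth, step, lam):
    # One proximal gradient step on f(x) = g(x) + lam * ||x||_1:
    # forward gradient step on the smooth part g, then the prox of the l1 part.
    return soft_threshold(x - step * grad_smooth(x), step * lam)

# Toy usage: smooth part g(x) = 0.5 * ||A x - b||^2 with made-up data.
A = np.array([[1.0, 0.0], [0.0, 2.0]])
b = np.array([1.0, -1.0])
x = np.zeros(2)
for _ in range(50):
    x = proximal_gradient_step(x, lambda z: A.T @ (A @ z - b), step=0.2, lam=0.1)
print(x)  # approximately sparse minimizer of the l1-regularized least squares
```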
“…After that, we apply the proposed algorithms to a distributed target tracking problem, which has been widely investigated in the literature, e.g., [16], [44]. Finally, we investigate the dynamic sparse signal recovery problem [17] and compare our algorithm with the existing ones.…”
Section: Numerical Examples
confidence: 99%
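For context, here is a hedged sketch of the kind of time-varying composite loss that dynamic sparse signal recovery experiments typically use; the measurement matrix A_t, observations b_t, and weight lam are placeholders rather than the exact setup of [17].

```python
import numpy as np

def sparse_recovery_loss(x, A_t, b_t, lam):
    # Time-varying composite loss f_t(x) = 0.5 * ||A_t x - b_t||^2 + lam * ||x||_1:
    # the least-squares term tracks the measurements at time t, while the
    # l1 term promotes sparsity in the recovered signal.
    return 0.5 * np.linalg.norm(A_t @ x - b_t) ** 2 + lam * np.sum(np.abs(x))

# As the underlying sparse signal drifts, A_t and b_t change with t, so the
# minimizer of f_t moves and an online algorithm has to track it over time.
```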
“…Amongst the existing distributed online optimization algorithms, the subgradient descent methods have gained considerable attention [10]-[13]. More recently, the authors of [14]-[17] developed distributed online algorithms on the basis of the primal-dual method, gradient tracking, the mirror descent approach, and the proximal gradient algorithm, respectively. The authors of [18] further introduced a primal-dual mirror descent algorithm to address distributed online problems with time-varying coupled inequality constraints over weight-balanced digraphs.…”
Section: Introduction
confidence: 99%
“…Different from classical distributed learning with static loss functions, each node in distributed online learning has a time-varying loss function f_{t,i}, t ∈ {0, …, T}, i ∈ {1, …, n}, with T the time horizon and n the number of learning nodes in the network. Various algorithms have been developed for distributed online learning [8]-[16], which aim to minimize the sum of the local loss functions over the time horizon T. For fixed undirected graphs, a distributed online optimization algorithm based on the alternating direction method of multipliers is proposed in the work of Akbari et al. [11], and a distributed mirror descent method for distributed online optimization is developed by using the Bregman divergence in the projection step [12]. For time-varying graphs, a multistep consensus-based distributed proximal online gradient descent algorithm over undirected networks is proposed in the work of Dixit et al. [14]. Extending the dual averaging scheme, a distributed online dual averaging algorithm for distributed online constrained optimization over undirected networks is investigated in the work of Hosseini et al. [15]. A primal-dual algorithm is further developed for distributed online optimization with coupled constraints over balanced graphs.…”
Section: Introduction
confidence: 99%
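The following is a minimal sketch of a consensus-based distributed online proximal gradient update of the kind discussed above, assuming a doubly stochastic mixing matrix W, smooth local losses, and a shared ℓ1 regularizer; the network, step size, and per-node data are illustrative placeholders rather than the exact scheme of any cited work.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||x||_1 (soft-thresholding), applied elementwise.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def distributed_online_prox_step(X, W, grads, step, lam):
    # One round of a consensus-based online proximal gradient update.
    #   X     : (n, d) array, row i is node i's current iterate x_{t,i}
    #   W     : (n, n) doubly stochastic mixing matrix of the (possibly time-varying) graph
    #   grads : (n, d) array, row i is the gradient of node i's smooth loss at x_{t,i}
    # Each node averages with its neighbours, descends along its local gradient,
    # and applies the prox of the shared l1 regularizer.
    Y = W @ X - step * grads
    return soft_threshold(Y, step * lam)

# Toy usage with a hypothetical 3-node network and quadratic local losses.
n, d = 3, 2
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
X = np.zeros((n, d))
targets = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # static here; online data would vary with t
for t in range(100):
    grads = X - targets  # gradient of each node's loss 0.5 * ||x - target_i||^2
    X = distributed_online_prox_step(X, W, grads, step=0.3, lam=0.05)
print(X.mean(axis=0))  # nodes approach a common, approximately sparse iterate
```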