2020
DOI: 10.1002/asjc.2489

Distributed adaptive online learning for convex optimization with weight decay

Abstract: This paper investigates an adaptive gradient-based online convex optimization problem over decentralized networks. The nodes of a network aim to track the minimizer of a global time-varying convex function, and the communication pattern among nodes is captured as a connected undirected graph. To tackle such optimization problems in a collaborative and distributed manner, a weight decay distributed adaptive online gradient algorithm, called WDDAOG, is first proposed, which incorporates distributed optimization…
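The abstract describes WDDAOG only at a high level, so the following is a minimal sketch (Python/NumPy) of one round of a distributed adaptive online gradient step with weight decay over an undirected graph. The mixing matrix W, step size eta, and decay coefficient lam are assumed names for illustration; this is not the paper's exact WDDAOG update.

```python
import numpy as np

def adaptive_step_with_decay(X, grads, W, v, eta=0.1, lam=0.01, eps=1e-8):
    """One illustrative round: mix iterates with neighbors via the doubly
    stochastic matrix W, then take an AdaGrad-style step with weight decay.
    A sketch only -- not the paper's exact WDDAOG update."""
    v = v + grads ** 2                        # per-coordinate gradient accumulator
    step = eta * grads / (np.sqrt(v) + eps)   # adaptive (per-coordinate) step size
    X_new = W @ X - step - eta * lam * X      # weight decay shrinks the iterates
    return X_new, v

# Usage: 3 nodes on a path graph tracking a drifting quadratic minimizer.
W = np.array([[2/3, 1/3, 0.0],
              [1/3, 1/3, 1/3],
              [0.0, 1/3, 2/3]])               # Metropolis weights, doubly stochastic
X = np.random.randn(3, 2)                     # row i holds node i's iterate
v = np.zeros_like(X)
for t in range(200):
    target = np.array([np.sin(0.05 * t), np.cos(0.05 * t)])  # time-varying minimizer
    grads = 2 * (X - target)                  # gradients of ||x_i - target||^2
    X, v = adaptive_step_with_decay(X, grads, W, v)
```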

Cited by 5 publications (11 citation statements) · References 42 publications

Citation statements (ordered by relevance):
“…Thus, we need to solve the following optimization problem
$$\underset{\mathbf{x} \in \mathcal{F}}{\operatorname{minimize}} \quad F(\mathbf{x}) = \sum_{t=1}^{T} \sum_{i=1}^{n} f_{i,t}(\mathbf{x}),$$
where $f_{i,t}: \mathcal{F} \rightarrow \mathbb{R}$ is a time-varying convex and potentially nonsmooth objective function. To embody the online essence of the problem (4), we select the dynamic regret [8, 29, 30] as the metric, in which the cumulative loss of all agents is compared against the best sequence $\{\mathbf{x}_t^*\}_{t=1}^{T}$, that is
$$R(T) = \sum_{i=1}^{n} \sum_{t=1}^{T} f_{i,t}\left(\mathbf{x}_{i,t}\right) - \sum_{t=1}^{T} f_t\left(\mathbf{x}_t^*\right),$$
where $\mathbf{x}_t^* = \operatorname*{arg\,min}_{\mathbf{x} \in \mathcal{F}} f_t(\mathbf{x})$. If the dynamic regret of an online algorithm is sublinear, that is, …”
Section: Problem Statement and Algorithm
confidence: 99%
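As a concrete reading of this definition, the sketch below computes the dynamic regret R(T) from per-agent loss functions. The quadratic losses and the drifting comparator sequence are illustrative assumptions, not taken from the cited paper.

```python
def dynamic_regret(losses, iterates, comparators):
    """Dynamic regret R(T): cumulative loss of every agent's iterate x_{i,t},
    minus the loss of the per-round minimizer x_t^* summed over agents.

    losses[t][i]   -- callable f_{i,t}
    iterates[t][i] -- agent i's decision x_{i,t} at round t
    comparators[t] -- per-round minimizer x_t^* of f_t = sum_i f_{i,t}
    """
    R = 0.0
    for t, fs in enumerate(losses):
        R += sum(f(x) for f, x in zip(fs, iterates[t]))  # agents' incurred cost
        R -= sum(f(comparators[t]) for f in fs)          # cost of best sequence
    return R

# Example: n = 2 agents, T = 3 rounds, quadratic losses with a drifting minimizer.
targets = [0.0, 0.5, 1.0]
losses = [[lambda x, c=c: (x - c) ** 2 for _ in range(2)] for c in targets]
iterates = [[0.0, 0.1], [0.0, 0.1], [0.0, 0.1]]
comparators = targets  # each round's f_t is minimized at the drifting target
print(dynamic_regret(losses, iterates, comparators))
```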
“…If the benchmark is a sequence of time-varying optimal solutions, the regret is called dynamic regret. For instance, the authors in Shen et al. [12] proposed a weight decay distributed adaptive online gradient algorithm and proved a dynamic regret bound of …”
Section: O(√T …)
confidence: 99%
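Sublinear dynamic regret means the time-averaged regret vanishes; a short numeric check makes this concrete (the √T growth rate used here is only an assumed example, since the bound above is truncated in the excerpt).

```python
# If R(T) grows like sqrt(T) (an assumed example rate), then R(T)/T -> 0,
# so the average per-round loss approaches that of the best sequence.
for T in (10, 1_000, 100_000):
    R = T ** 0.5          # stand-in for a sublinear regret bound
    print(T, R / T)       # time-averaged regret shrinks toward 0
```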
“…In the regret analysis, we prove the individual regret to be sublinear by establishing a relationship between the network regret and the individual regret. In particular, substituting (13) into (12), the difference between $R_j(T)$ and $R(T)$ can be written as a cumulative sum of network disagreement terms, which reflects the consensus errors among agents.…”
Section: Lemma 2 Consider the Following Sequence {X…
confidence: 99%
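The role of the disagreement terms can be illustrated numerically: under repeated mixing with a doubly stochastic matrix, the nodes' iterates contract toward their average, so the consensus error driving the gap between individual and network regret shrinks geometrically. The matrix and initial iterates below are illustrative assumptions.

```python
import numpy as np

# Doubly stochastic mixing matrix for a 3-node path graph (Metropolis weights).
W = np.array([[2/3, 1/3, 0.0],
              [1/3, 1/3, 1/3],
              [0.0, 1/3, 2/3]])
X = np.array([[1.0], [0.0], [-1.0]])   # row i: node i's iterate

for k in range(5):
    disagreement = np.linalg.norm(X - X.mean(axis=0))  # consensus error
    print(k, disagreement)                             # decays geometrically
    X = W @ X                                          # one consensus step
```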