2019 · Preprint
DOI: 10.48550/arxiv.1901.10593

Decentralized Online Learning: Take Benefits from Others' Data without Sharing Your Own to Track Global Trend

Abstract: Decentralized Online Learning (online learning in decentralized networks) has attracted increasing attention, since it is believed to help data providers cooperatively solve their online problems better without sharing their private data with a third party or with other providers. Typically, the cooperation is achieved by letting the data providers exchange their models, e.g., recommendation models, with their neighbors. However, the best regret bound for a decentralized online learni…

Cited by 11 publications (27 citation statements) · References 31 publications

Citation statements, ordered by relevance:
“…The communication model includes delays, and the regret bound depends on a quantity related to the mixing time of a certain random walk on the network. Zhao et al [18] study a decentralized online learning setting in which losses are characterized by two components, one adversarial and the other stochastic. They show upper bounds on the regret in terms of one constant representing the magnitude of the adversarial component and another measuring the randomness of the stochastic part.…”
Section: Related Work
mentioning
confidence: 99%
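One natural way to read the two-component structure described above is as a per-agent loss gradient that splits into a common (possibly adversarial) part plus zero-mean stochastic noise; the constants $G$ and $\sigma^2$ referenced later in this report bound these two parts. The schematic below is our hedged reading of the excerpt, not notation taken from Zhao et al [18].

```latex
% Schematic two-component gradient, as one reading of the excerpt.
% The symbols g_t, \xi_{t,i}, G, \sigma^2 are our assumptions, not [18]'s notation.
\[
  \nabla f_{t,i}(x)
  \;=\;
  \underbrace{g_t(x)}_{\text{adversarial part}}
  \;+\;
  \underbrace{\xi_{t,i}}_{\text{stochastic part}},
  \qquad
  \|g_t(x)\| \le G,
  \qquad
  \mathbb{E}[\xi_{t,i}] = 0,\;\;
  \mathbb{E}\|\xi_{t,i}\|^2 \le \sigma^2 .
\]
```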
“…Examples of these works include Kamp et al (2014); Shahrampour and Jadbabaie (2017); Lee et al (2016). Notably, Zhao et al (2019) share a problem definition and theoretical results similar to ours. However, single-sided communication is not allowed in their setting, making their results more restrictive.…”
Section: Related Work
mentioning
confidence: 52%
“…t for any $i$ and $t$, then Algorithm 1 reduces to the distributed online gradient method proposed by Zhao et al (2019).…”
mentioning
confidence: 99%
“…However, little attention has been paid to understanding the impact of the network size on the average regret achievable at each individual agent. To characterize the potential benefits that an agent can reap when carrying out distributed OCO, a convex loss function with both adversarial and stochastic components is considered in [36]. Assuming that the expected gradient is bounded above by $G$ and the stochastic variance is bounded above by $\sigma^2$, they have shown that the network expected regret is $O(\sqrt{N^2 G^2 T + N T \sigma^2})$.…”
Section: Related Work
mentioning
confidence: 99%
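For readability, the reported bound can be displayed with its symbols spelled out. The readings of $N$ as the number of agents and $T$ as the time horizon are inferred from context rather than stated in the excerpt.

```latex
% Network expected regret reported for [36] (Zhao et al., 2019):
%   N = number of agents (inferred), T = time horizon (inferred),
%   G = bound on the expected gradient, \sigma^2 = bound on the stochastic variance.
\[
  \mathbb{E}[R_T] \;=\; O\!\left( \sqrt{\,N^2 G^2 T \;+\; N T \sigma^2\,} \right).
\]
```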
“…Remark 2. The classical DOGD algorithm [36] cannot achieve such a performance gain in this setting, because DOGD essentially performs a consensus step followed by a gradient descent step along the local gradient $\nabla f_{t,i}(x_{t,i})$. For a fixed learning rate, DOGD only converges to a neighborhood of the optimizer $x^*$, because the local gradient is data-driven and hence random.…”
Section: Performance Analysis
mentioning
confidence: 99%
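The consensus-then-descent structure this excerpt attributes to DOGD is concrete enough to sketch. Below is a minimal, hypothetical NumPy sketch of one such round, not the authors' implementation: the mixing matrix `W`, the quadratic losses, and the fixed learning rate `eta` are illustrative assumptions.

```python
import numpy as np

def dogd_round(X, W, grads, eta):
    """One DOGD-style round: a consensus (neighbor-averaging) step,
    then a descent step along each agent's local gradient.

    X     : (N, d) array, row i is agent i's current model x_{t,i}
    W     : (N, N) doubly stochastic mixing matrix (nonzero only between neighbors)
    grads : (N, d) array, row i is the local gradient grad f_{t,i}(x_{t,i})
    eta   : fixed learning rate
    """
    X_consensus = W @ X                # each agent averages its neighbors' models
    return X_consensus - eta * grads   # then descends along its own local gradient

# Illustrative run: 3 agents, quadratic losses f_{t,i}(x) = ||x - b_i||^2 / 2 with noise.
rng = np.random.default_rng(0)
N, d, eta, T = 3, 2, 0.1, 200
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
targets = rng.normal(size=(N, d))      # b_i: each agent's local optimum (hypothetical data)
X = np.zeros((N, d))
for t in range(T):
    noisy = targets + 0.01 * rng.normal(size=(N, d))
    grads = X - noisy                  # gradient of the noisy quadratic loss
    X = dogd_round(X, W, grads, eta)
print(X.mean(axis=0), targets.mean(axis=0))
```

With a fixed `eta` the iterates settle near, not exactly at, the average of the local optima, which matches the excerpt's remark that DOGD only converges to a neighborhood of $x^*$.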