2019 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)
DOI: 10.1109/ipdpsw.2019.00157
Random Walk Gradient Descent for Decentralized Learning on Graphs

Cited by 3 publications (2 citation statements) · References 7 publications
“…Parts of these results pertaining to Algorithms 1 and 2 have already appeared in a conference paper [20].…”
Section: “Moreover, we propose a privacy-preserving version of them bas…” (mentioning)
Confidence: 95%
“…2) Otherwise, if the candidate node gets rejected, the random walk stays at the same node, i.e., i^(k+1) = i^(k). b) Static Weighted Random Walk: This algorithm assigns a static importance metric to each node that is proportional to the gradient-Lipschitz constant of the local loss function [42], [50]. The random walk is designed by MH again with a stationary distribution that is proportional to the local gradient-Lipschitz constants.…”
Section: A. Baseline Algorithms (mentioning)
Confidence: 99%
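The Metropolis-Hastings (MH) construction described in this excerpt can be sketched in a few lines. Below is a minimal, illustrative Python sketch, not code from the cited paper: the names `graph`, `weights`, and `mh_step` are assumptions, `weights[i]` stands for the gradient-Lipschitz constant of node i's local loss, the target stationary distribution is proportional to those constants, and a rejected proposal leaves the walk at its current node, as in the quote above.

```python
import random

def mh_step(graph, weights, i):
    """One Metropolis-Hastings transition from node i.

    Hypothetical interfaces: `graph` is an adjacency-list dict,
    `weights[i]` is the gradient-Lipschitz constant L_i of node i's
    local loss, so the stationary distribution is pi_i proportional to L_i.
    """
    # Propose a uniformly random neighbor j of the current node i.
    j = random.choice(graph[i])
    # MH acceptance ratio for target pi proportional to `weights` under a
    # uniform-neighbor proposal: q(i->j) = 1/deg(i), q(j->i) = 1/deg(j).
    accept = min(1.0, (weights[j] * len(graph[i])) / (weights[i] * len(graph[j])))
    if random.random() < accept:
        return j  # move to the accepted candidate node
    return i      # rejected: the walk stays at the same node

# Usage sketch on a toy graph (all values illustrative).
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
weights = {0: 1.0, 1: 2.0, 2: 0.5, 3: 1.5}  # hypothetical L_i values
node = 0
for _ in range(10):
    node = mh_step(graph, weights, node)
    # A decentralized-learning loop would take one local gradient step
    # at `node` here; that update is omitted in this sketch.
```

Because the acceptance ratio satisfies detailed balance with respect to pi_i proportional to weights[i], nodes with larger gradient-Lipschitz constants are visited more often in the long run, which is the importance-weighting effect the quoted baseline relies on.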