2022
DOI: 10.48550/arxiv.2202.01545
Preprint
Byzantine-Robust Decentralized Learning via Self-Centered Clipping

Abstract: In this paper, we study the challenging task of Byzantine-robust decentralized training on arbitrary communication graphs. Unlike federated learning where workers communicate through a server, workers in the decentralized environment can only talk to their neighbors, making it harder to reach consensus. We identify a novel dissensus attack in which few malicious nodes can take advantage of information bottlenecks in the topology to poison the collaboration. To address these issues, we propose a Self-Centered C…
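As background for the abstract above, the aggregation rule it names can be sketched as follows. This is a minimal illustration, assuming the commonly cited form of self-centered clipping: each worker keeps its own model as the reference point and clips every neighbor's deviation to a radius τ before the weighted update. The function names and signatures here are illustrative, not the paper's code.

```python
import numpy as np

def clip_to_radius(z, tau):
    # Scale z so that its Euclidean norm is at most tau.
    norm = np.linalg.norm(z)
    return z if norm <= tau else z * (tau / norm)

def self_centered_clipping(x_i, neighbor_models, weights, tau):
    """One aggregation step of self-centered clipping (sketch):
    worker i treats its own model x_i as the center and clips each
    neighbor's deviation x_j - x_i to radius tau before averaging."""
    x_i = np.asarray(x_i, dtype=float)
    update = np.zeros_like(x_i)
    for w_ij, x_j in zip(weights, neighbor_models):
        diff = np.asarray(x_j, dtype=float) - x_i
        update += w_ij * clip_to_radius(diff, tau)
    return x_i + update
```

Because every neighbor's influence is bounded by τ around the worker's own model, a Byzantine neighbor sending an arbitrarily large update can shift the result by at most its weight times τ.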

Cited by 2 publications (11 citation statements)
References 25 publications (43 reference statements)
“…To solve this limitation, Guo et al. (2022) propose an algorithm called uniform Byzantine-resilient aggregation rule (UBAR) that combines distance-based and performance-based aggregators. However, this method will fail in the non-IID data setting (He, Karimireddy, and Jaggi 2022). There are only a few works on Byzantine-robust decentralized learning in the non-IID data setting (He, Karimireddy, and Jaggi 2022; Wu, Chen, and Ling 2023; Li et al. 2017).…”
Section: Related Work
confidence: 99%
“…In particular, some devices can be faulty, referred to as Byzantine workers (Hegedűs, Danner, and Jelasity 2021), due to software/hardware errors or getting hacked, and send arbitrary or malicious model updates to other devices, thus severely degrading the overall performance. To address Byzantine attacks in the training process, a few Byzantine-robust decentralized learning algorithms have been introduced recently (Yang and Bajwa 2019; Guo et al. 2022; Fang, Yang, and Bajwa 2022; He, Karimireddy, and Jaggi 2022), where benign workers attempt to combine the updates received from their neighbors by using robust aggregation rules to mitigate the impact of potential Byzantine workers. Most current algorithms deal with Byzantine attacks under independent and identically distributed (IID) data across the devices; however, in reality, the data can vary dramatically across the devices in terms of quantity, label distribution, and feature distribution (Zhao et al. 2018; Hsieh et al. 2020).…”
Section: Introduction
confidence: 99%
“…For the scenario of optimal parameters, each honest agent sets the parameter b of CTM and IOS to the number of its Byzantine neighbors. In SCC, the clipping threshold τ is determined according to Theorem 3 in [27]. The results are shown in Figs.…”
Section: A Case 1: Synthetic Problem
confidence: 99%
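The statement above compares SCC against trimming-based rules parameterized by the number b of Byzantine neighbors. For contrast with the clipping approach, here is a minimal sketch of a coordinate-wise trimmed mean, assuming the standard formulation: per coordinate, drop the b largest and b smallest values and average the rest. The function name and interface are illustrative, not taken from the cited work.

```python
import numpy as np

def coordinate_trimmed_mean(models, b):
    """Coordinate-wise trimmed mean (sketch): for each coordinate,
    discard the b largest and b smallest values across the received
    models and average the remaining ones."""
    arr = np.sort(np.asarray(models, dtype=float), axis=0)
    n = arr.shape[0]
    if 2 * b >= n:
        raise ValueError("need more than 2*b models to trim b from each side")
    return arr[b:n - b].mean(axis=0)
```

Unlike clipping, which bounds each neighbor's influence, trimming discards the extreme values outright, which is why it needs an estimate of b per agent.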
“…This is different from the resource allocation problem, where the honest agents are expected to obtain different optimal solutions (namely, allocated resources). Some works focus on deterministic problems [19], [20], [21], [22], [23], [24], [25] and some others consider stochastic problems [26], [27]. Their common feature is to let each honest agent aggregate possibly malicious messages (namely, optimization variables) received from its neighbors in a robust manner.…”
Section: Introduction
confidence: 99%