2019
DOI: 10.1109/mnet.2019.1900025

Secure Distributed On-Device Learning Networks with Byzantine Adversaries

Abstract: Privacy concerns arise when a central server holds copies of the datasets. Hence, learning networks are undergoing a paradigm shift from centralized in-cloud learning to distributed on-device learning. Benefiting from parallel computing, on-device learning networks have a lower bandwidth requirement than in-cloud learning networks. Moreover, on-device learning networks also have several desirable characteristics such as privacy preservation and flexibility. However, the on-device lea…

Cited by 26 publications (24 citation statements); references 10 publications.
“…• Additional mechanisms are needed to offer protection against network, physical, software, and encryption attacks. In addition, it is critical to have protection against adversarial attacks during on-device learning [130]. • Future communications networks.…”
Section: Future Challenges of Edge-AI G-IoT Systems
confidence: 99%
“…Or agents selecting plans that maximize the inefficiency cost, or even agents that arbitrarily violate the execution of their selected plans. Making collective learning tolerant to Byzantine faults may require methods to identify and isolate such agents, or novel collective actions by other agents to remedy the effect of adversarial behavior [61].…”
Section: B. Learning Resilience Against Plan Violations and Adversaries
confidence: 99%
“…FL avoids the direct raw data exchange among edge devices to alleviate the privacy concerns while collaboratively training a common model under the orchestration of a central server. However, a number of challenges arise for the practical deployment of FL, including the statistical challenges with non-IID (not independent and identically distributed) datasets across edge devices [6], [8], [9], high communication costs during the training process [6], [8], privacy and security concerns because of adversarial devices [3], [9], [10], heterogeneous devices with varying resource constraints [9], and system design issues [11], such as the unreliable device connectivity, interrupted execution and slow convergence.…”
Section: Introduction
confidence: 99%
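
The statement above rests on the basic federated-learning workflow: each device trains on its private data and only model parameters are exchanged with an orchestrating server. The following minimal sketch of one federated-averaging round illustrates that workflow; the linear model, loss, learning rate, and function names are illustrative assumptions, not the cited paper's implementation.

```python
import numpy as np

def local_update(global_weights, X, y, lr=0.1, epochs=1):
    """One device's local training pass (toy linear model, mean squared error)."""
    w = global_weights.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)  # gradient of the MSE loss
        w = w - lr * grad
    return w

def fedavg_round(global_weights, devices):
    """Server-side step: collect local updates and average them, weighted by data size."""
    updates = [local_update(global_weights, X, y) for X, y in devices]
    sizes = np.array([len(y) for _, y in devices], dtype=float)
    return np.average(np.stack(updates), axis=0, weights=sizes)

# Toy run: three devices, each keeping its raw data local.
rng = np.random.default_rng(0)
devices = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]
w = np.zeros(3)
for _ in range(10):
    w = fedavg_round(w, devices)
print(w)
```

Note that only the weight vectors cross the network; the raw (X, y) pairs stay on the devices, which is the privacy property the quoted passage refers to.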
“…State-of-the-art research progress has been made on secure FL in the presence of Byzantine devices. The authors in [10] classified the current secure FL algorithms into four categories: robust aggregation rules [10], [21], preprocessing methods from the information-theoretical perspective [22], [23], models with a regularization term [24], and adversarial detection [25], [26]. In this paper, we focus on the robust aggregation rule.…”
Section: Introduction
confidence: 99%
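
Since the quoted passage singles out robust aggregation rules as its focus, a hedged sketch of what such a rule can look like may help: the server replaces the plain mean of device updates with a coordinate-wise median or trimmed mean, so that a minority of Byzantine devices sending arbitrary vectors cannot drag the aggregate far from the honest updates. The function names and toy data below are illustrative, not taken from the cited works.

```python
import numpy as np

def coordinate_median(updates):
    """Aggregate device updates by taking the median of every parameter coordinate."""
    return np.median(np.stack(updates), axis=0)

def trimmed_mean(updates, trim_ratio=0.2):
    """Per coordinate, drop the largest and smallest trim_ratio fraction, then average."""
    stacked = np.sort(np.stack(updates), axis=0)
    k = int(len(updates) * trim_ratio)
    return stacked[k:len(updates) - k].mean(axis=0)

# Toy example: 8 honest updates near the value 1, plus 2 Byzantine outliers.
rng = np.random.default_rng(1)
honest = [np.ones(4) + 0.01 * rng.normal(size=4) for _ in range(8)]
byzantine = [100.0 * np.ones(4), 80.0 * np.ones(4)]
updates = honest + byzantine

print(np.mean(np.stack(updates), axis=0))  # plain mean is dragged toward the outliers
print(coordinate_median(updates))          # median stays close to the honest updates
print(trimmed_mean(updates))               # trimmed mean also discards the outliers
```

Both estimators tolerate a bounded fraction of arbitrary updates without needing to identify which devices are Byzantine, which is what distinguishes this category from adversarial-detection approaches.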