Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence 2019
DOI: 10.24963/ijcai.2019/670

FABA: An Algorithm for Fast Aggregation against Byzantine Attacks in Distributed Neural Networks

Abstract: Training a large-scale deep neural network on a single machine is increasingly difficult as network models grow more complex. Distributed training offers an efficient solution, but Byzantine attacks may occur on participating workers: workers may be compromised or suffer hardware failures, and if they upload poisonous gradients, training becomes unstable or may even converge to a saddle point. In this paper, we propose FABA, a Fast Aggregation algorithm against Byzantine Attacks, which r…
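
The abstract is truncated above, so the Python sketch below is only an assumed illustration of the general idea of outlier-filtering aggregation (iteratively discarding the worker gradient farthest from the mean of the remaining ones) and not the authors' published pseudocode; the function name robust_aggregate and the parameter num_byzantine are invented for this sketch.

import numpy as np

def robust_aggregate(gradients, num_byzantine):
    """Illustrative sketch (assumed, not the published FABA pseudocode):
    repeatedly discard the worker gradient farthest from the mean of the
    remaining gradients, then average what is left.

    gradients     -- list of 1-D numpy arrays, one per worker
    num_byzantine -- assumed upper bound on the number of Byzantine workers
    """
    remaining = [np.asarray(g, dtype=float) for g in gradients]
    for _ in range(num_byzantine):
        mean = np.mean(remaining, axis=0)
        # distance of each remaining gradient from the current mean
        dists = [np.linalg.norm(g - mean) for g in remaining]
        # drop the most anomalous gradient
        remaining.pop(int(np.argmax(dists)))
    return np.mean(remaining, axis=0)

# toy usage: 6 honest workers near the true gradient, 2 poisoned ones
rng = np.random.default_rng(0)
honest = [np.ones(4) + 0.01 * rng.standard_normal(4) for _ in range(6)]
poisoned = [np.full(4, 100.0), np.full(4, -100.0)]
print(robust_aggregate(honest + poisoned, num_byzantine=2))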

Cited by 56 publications (21 citation statements); references 6 publications.
“…A trusted execution environment (TEE) is a secure execution environment (mostly connected hardware components) which guarantees code and data loaded inside to be protected with respect to confidentiality and integrity [17]. Although dedicated TEEs for AI systems have been widely studied [87]- [90], the mechanisms for constructing secured environments are broader than just AI dedicated ones. Non-AI system dedicated defense mechanisms could be applied to protect trained machine learning models as the TEE in the testing phase [91].…”
Section: B. Defense Mechanisms for Testing Nodes
Citation type: mentioning; confidence: 99%
“…(1) Label flipping: the attacker ''flips'' the labels of its training data to arbitrary ones (e.g., via a permutation function). (2) Adding noise: the attacker contaminates the dataset by adding noise to degrade the quality of the trained models. (3) Backdoor trigger: the attacker injects a trigger into a small area of the original dataset to cause the classifier to misclassify triggered inputs into the target category.…”
Section: B. Byzantine Attack
Citation type: mentioning; confidence: 99%
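
As a hedged illustration of the first two poisoning attacks listed in the statement above (label flipping via a permutation function and additive noise), the Python snippet below sketches how a compromised worker might corrupt its local batch; the function names and the noise scale are assumptions, not details from the cited work.

import numpy as np

def flip_labels(labels, num_classes, rng):
    """Label flipping: remap each label through a random permutation
    of the class indices (one simple choice of permutation function)."""
    perm = rng.permutation(num_classes)
    return perm[labels]

def add_noise(features, scale, rng):
    """Adding noise: contaminate the features with Gaussian noise to
    degrade the quality of the trained model."""
    return features + rng.normal(0.0, scale, size=features.shape)

rng = np.random.default_rng(1)
x = rng.standard_normal((8, 4))    # toy feature batch
y = rng.integers(0, 3, size=8)     # toy labels for 3 classes
print(flip_labels(y, num_classes=3, rng=rng))
print(add_noise(x, scale=0.5, rng=rng)[0])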
“…There are two ways to modify the parameters: (1) modifying the direction and size of the parameter learned from the local dataset, e.g., flipping the signs of local iterates and gradients, or enlarging their magnitudes; (2) modifying the parameter directly, e.g., randomly sampling a number from a Gaussian distribution and treating it as one of the parameters of the local model.…”
Section: B. Byzantine Attack
Citation type: mentioning; confidence: 99%
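
A minimal Python sketch, under assumed scale and distribution parameters, of the two parameter-manipulation strategies described in the statement above: flipping the sign and enlarging the magnitude of an honest gradient, and uploading values sampled directly from a Gaussian distribution.

import numpy as np

def sign_flip_attack(gradient, scale=10.0):
    """Flip the sign of an honestly computed gradient and enlarge its
    magnitude (assumed scale factor) before uploading it."""
    return -scale * gradient

def gaussian_attack(shape, rng, mean=0.0, std=1.0):
    """Replace the uploaded parameters with values sampled directly from
    a Gaussian distribution instead of a locally learned update."""
    return rng.normal(mean, std, size=shape)

rng = np.random.default_rng(2)
honest_grad = np.array([0.2, -0.1, 0.05])
print(sign_flip_attack(honest_grad))
print(gaussian_attack(honest_grad.shape, rng))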