ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2019.8683442
A Case of Distributed Optimization in Adversarial Environment

Cited by 20 publications (14 citation statements). References 19 publications.
“…Ravi et al. [88] analyzed the possible behavior of malicious agents in the system. Suppose the malicious agents intend to manipulate their objective functions, such that the output using the cost functions from all agents, x_a, will deviate from the correct output x^* by a vector, i.e.…”
Section: Alternative Adversarial Models (mentioning)
confidence: 99%
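The quoted statement truncates the deviation formula after "i.e." The following is a minimal LaTeX sketch of the formulation it appears to describe; the honest set \mathcal{H}, the malicious set \mathcal{M}, the manipulated costs \tilde{f}_j, and the perturbation symbol \epsilon are named here only for illustration and are not given in the quote.

```latex
% Hedged sketch of the attack described above (symbols are assumptions,
% since the quoted statement truncates the formula after "i.e."):
% x^* : minimizer of the sum of the true local costs f_i
% x_a : minimizer obtained when malicious agents substitute manipulated costs \tilde{f}_j
\[
  x^* = \arg\min_x \sum_{i=1}^{n} f_i(x), \qquad
  x_a = \arg\min_x \Big( \sum_{i \in \mathcal{H}} f_i(x) + \sum_{j \in \mathcal{M}} \tilde{f}_j(x) \Big),
  \qquad x_a = x^* + \epsilon .
\]
```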
“…Note that all the above-mentioned algorithms are proposed under the assumption that the surrounding communication network is benign. In fact, complex cyber-physical networks usually operate in open and hostile environments and thus inevitably suffer from cyber or physical attacks [33-42]. Distributed optimization problems in adversarial environments have also come into the view of researchers, where some of the nodes are adversarial or faulty.…”
Section: Introduction (mentioning)
confidence: 99%
“…Distributed optimization problems in adversarial environments have also come into the view of researchers, where some of the nodes are adversarial or faulty. In the case that malicious nodes intend to drive the solutions of normal nodes away from the global minimizer, Ravi et al. [36] use the trends of averaged gradient variations to detect the neighbors that are likely to be malicious, observing that the gradients of attackers tend to be large. A fault-tolerant optimal iterative distributed algorithm [37] is proposed for a completely connected network with at most a third of the nodes being faulty, where only two neighbors' estimates and another two neighbors' gradients are used in each node's update.…”
Section: Introduction (mentioning)
confidence: 99%
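The detection idea attributed to Ravi et al. [36] above, screening neighbors whose averaged gradients trend unusually large, can be illustrated with a short sketch. This is a hedged approximation, not the exact rule from [36]; the function name flag_suspicious_neighbors and the window and threshold parameters are assumptions made for illustration.

```python
import numpy as np

def flag_suspicious_neighbors(grad_history, window=10, threshold=3.0):
    """Hedged sketch of gradient-trend screening (not the exact rule from [36]).

    grad_history: dict mapping neighbor id -> list of gradient vectors
                  received from that neighbor over past iterations.
    A neighbor is flagged when the running average of its recent gradient
    norms exceeds `threshold` times the median average across neighbors.
    """
    avg_norm = {}
    for j, grads in grad_history.items():
        recent = grads[-window:]
        avg_norm[j] = np.mean([np.linalg.norm(g) for g in recent])
    baseline = np.median(list(avg_norm.values()))
    return {j for j, a in avg_norm.items() if a > threshold * baseline}

# Example: neighbor 2 keeps sending unusually large gradients.
rng = np.random.default_rng(0)
history = {
    0: [rng.normal(size=5) for _ in range(20)],
    1: [rng.normal(size=5) for _ in range(20)],
    2: [10.0 * rng.normal(size=5) for _ in range(20)],  # attacker-like behavior
}
print(flag_suspicious_neighbors(history))  # expected: {2}
```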
“…A handful of recent papers have considered this problem for the case where agent misbehavior follows prescribed patterns [6], [7]. A more general (and serious) form of misbehavior is captured by the Byzantine adversary model from computer science, where misbehaving agents can send arbitrary (and conflicting) values to their neighbors at each iteration of the algorithm.…”
Section: Introduction (mentioning)
confidence: 99%
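The Byzantine adversary model described in this quote, where a misbehaving agent may send arbitrary and mutually conflicting values to its neighbors at every iteration, can be contrasted with honest behavior in a small sketch. The HonestAgent and ByzantineAgent classes and their messages interface are hypothetical, introduced only to illustrate the distinction.

```python
import numpy as np

# Minimal illustration (assumed interface, not taken from the cited papers) of
# the difference between an honest agent and a Byzantine one in a
# consensus-style iteration: the honest agent sends one consistent state to
# all neighbors, while the Byzantine agent may send arbitrary, conflicting values.

class HonestAgent:
    def __init__(self, x0):
        self.x = np.asarray(x0, dtype=float)

    def messages(self, neighbors):
        # Same value to every neighbor.
        return {j: self.x.copy() for j in neighbors}

class ByzantineAgent:
    def __init__(self, dim, rng=None):
        self.dim = dim
        self.rng = rng or np.random.default_rng()

    def messages(self, neighbors):
        # Arbitrary and possibly conflicting value per neighbor, per iteration.
        return {j: self.rng.uniform(-1e3, 1e3, size=self.dim) for j in neighbors}

honest = HonestAgent([0.5, -0.2])
byz = ByzantineAgent(dim=2)
print(honest.messages([1, 2]))  # identical copies sent to both neighbors
print(byz.messages([1, 2]))     # two different arbitrary vectors
```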