2024
DOI: 10.1109/tnnls.2023.3288577
Random Polynomial Neural Networks: Analysis and Design

Cited by 1 publication (1 citation statement)
References 38 publications
"…For example, given the local update gradients (or local models), a malicious server can infer whether a specific data point belongs to the training set of a particular client [12]. Moreover, an attacker in an open public environment can precisely reconstruct small batches of training samples [13][14][15] if the updated gradients are intercepted and the local model architectures are known. Such attacks pose a serious privacy threat to FL systems.…"
Section: Introduction
Confidence: 99%