2021 International Conference on Computer Communications and Networks (ICCCN)
DOI: 10.1109/icccn52240.2021.9522327
Efficient Communication Topology via Partially Differential Privacy for Decentralized Learning

Cited by 4 publications (1 citation statement) · References 22 publications
“…In spite of the simplicity of centralized server-based model fine-tuning, the entire aggregation tree easily suffers from communication and computational bottlenecks at each member node [24], especially in a mobile social network (MSN), where the large volume of news and the rapidly growing quantity of data are two critical characteristics of MSNs. Moreover, to secure user data against model inversion attacks, which reproduce the data used for model fine-tuning [12,14,15], some security-intensive applications incorporate random noise into model fine-tuning [6,12]. Such random noise has been proven to have a negative impact on predictive performance.…”
Section: Online Model Fine-tuning and Ensemble-based Model Inference
Confidence: 99%