2023
DOI: 10.1109/tnnls.2022.3169347
Collaborative Machine Learning: Schemes, Robustness, and Privacy

Cited by 14 publications (9 citation statements)
References 84 publications
“…In collaborative ML, researchers have investigated how data science and ML practitioners may work with domain experts in different stages of the ML pipeline [5,26,30,31,38,40,43,49]. Current collaboration approaches are largely through live meetings, messaging channels, and, in a few instances, specially designed graphical interfaces [5,8,10,25,37,40].…”
Section: Related Work
confidence: 99%
“…We attack some parameters randomly in those layers with the two most likely forms: ① g(θ + ϵ) ② g(θ) + ϵ, where θ and ϵ denote parameters and noise, respectively. g represents activation operations in pure classical models, while it denotes sine or cosine functions in the hybrid models [21], [22]. The robustness test is separated into symmetrical noise attacks of forms ① and ② and unsymmetrical noise attacks of forms ① and ② (see supplementary materials part D for more results).…”
Section: Robustness Test
confidence: 99%
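The two attack forms quoted above can be illustrated with a small sketch. This is not the cited paper's code; the layer size, noise scale, and activation choices are assumptions made purely for illustration. Form ① perturbs the parameters before the activation, g(θ + ϵ); form ② perturbs the activation's output, g(θ) + ϵ.

```python
import numpy as np

rng = np.random.default_rng(0)

def attack_before(theta, g, sigma=0.1):
    """Form ①: add noise to the parameters, then apply the activation."""
    eps = rng.normal(0.0, sigma, size=theta.shape)
    return g(theta + eps)

def attack_after(theta, g, sigma=0.1):
    """Form ②: apply the activation, then add noise to its output."""
    eps = rng.normal(0.0, sigma, size=theta.shape)
    return g(theta) + eps

theta = np.linspace(-1.0, 1.0, 5)          # toy parameter vector (assumed)
relu = lambda x: np.maximum(x, 0.0)        # activation in a pure classical model
print(attack_before(theta, relu))          # form ① with a classical activation
print(attack_after(theta, np.sin))         # form ② with a sine activation (hybrid model)
```

Comparing a model's accuracy under the two forms, at matched noise scales, is what separates the symmetric from the unsymmetric robustness tests described in the excerpt.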
“…FL has been widely used in various fields including medical diagnosis, financial data analysis, and cybersecurity. However, it has been shown to be vulnerable to adversarial attacks [5], [6], [7], [8], [9]. Notably, a compromised client might inject malicious model parameters into the FL system, causing malfunction and influencing other clients in the system (Figure 1a).…”
Section: Introduction
confidence: 99%
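The vulnerability the excerpt describes can be sketched in a few lines: with plain federated averaging, a single compromised client submitting extreme parameters shifts the global model shared by every participant. The client counts and parameter values below are assumptions for illustration, not taken from the cited work.

```python
import numpy as np

def fedavg(client_updates):
    """Unweighted federated averaging of client parameter vectors."""
    return np.mean(client_updates, axis=0)

honest = [np.ones(4) for _ in range(9)]     # nine well-behaved clients, params at 1.0
malicious = [np.full(4, -100.0)]            # one compromised client injects extreme params

clean_global = fedavg(honest)               # stays at 1.0
poisoned_global = fedavg(honest + malicious)  # dragged to (9*1.0 - 100.0)/10 = -9.1
print(clean_global)
print(poisoned_global)
```

Robust aggregation rules (e.g. coordinate-wise median or trimmed mean) are the standard countermeasure, since an average gives even one outlier unbounded influence.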