Can we Generalize and Distribute Private Representation Learning?
Preprint, 2020
DOI: 10.48550/arxiv.2010.01792

Cited by 2 publications (4 citation statements); references 0 publications.
“…n /D (k) . Then, the cumulative average of the global loss function gradient for PSL satisfies (22). Proof.…”
Section: T)(26)
Citation type: mentioning; confidence: 99%
“…Also, economic incentives (e.g., rewards, gas credit, or cash back) can also facilitate data sharing in many applications such as autonomous driving. Further, there exists emerging research on privacy preserving representation learning, which aims to obfuscate the sensitive attributes of data, making the data shareable without privacy concerns [21], [22]. These will make the data sharing a viable method in the near future.…”
Section: B FedL Shortcomings and Solution Overview
Citation type: mentioning; confidence: 99%
“…Another closely related line of work is concerned with individual samples' privacy; this is relevant in cases where the samples contain highly sensitive information and they need to be kept secret even from the ML model. Examples of privacy-preserving ML include works based on differential privacy such as [22][23][24] and privacy-preserving learning such as [25][26][27][28][29][30]. A major distinction between this line of work and machine unlearning is that samples do not need to be kept private in machine unlearning, but the requests of unlearning need to be honored.…”
Section: Related Work
Citation type: mentioning; confidence: 99%

Coded Machine Unlearning
Aldaghri, Mahdavifar, and Beirami (2020), Preprint