2021
DOI: 10.48550/arxiv.2103.05633
Preprint

Proof-of-Learning: Definitions and Practice

Cited by 3 publications (5 citation statements) · References 42 publications
“…Weber et al [265] and Levine et al [266] both propose newer certified defenses against data poisoning; the former uses techniques similar to randomized smoothing [240] and the latter uses collective knowledge from an ensemble of models to prove robustness. Finally, integrity of the computation and model parameter values can also be verified by a mechanism that provides a proof associated with the particular run of gradient descent which produced the model parameters, as suggested by Jia et al [151].…”
Section: B. Assurance for Training
Confidence: 98%
From: SoK: Machine Learning Governance (Chandrasekaran, Jia, Thudi et al., 2021; Preprint, Self Cite)
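
The mechanism in the statement above amounts to logging a training transcript: while running gradient descent, the prover periodically records the model weights together with a commitment to the data used, so the computation can later be checked step by step. A minimal sketch of that idea on a toy logistic-regression loop; the checkpoint interval, record layout, and every name here are illustrative assumptions, not the implementation of Jia et al. [151]:

```python
# Sketch of a proof-of-learning transcript: during training, the prover
# records (step, weights, hash of the batch indices) every k steps so a
# verifier can later re-execute those gradient-descent segments.
# All names and parameters are hypothetical.
import hashlib
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 10)).astype(np.float32)  # toy dataset
y = (X @ rng.normal(size=10).astype(np.float32) > 0).astype(np.float32)

w = np.zeros(10, dtype=np.float32)   # model: logistic-regression weights
lr, batch, k = 0.1, 32, 10           # k = checkpoint interval

transcript = []                       # the "proof": periodic training states
order = rng.permutation(len(X))       # logged data ordering (kept secret)

for step in range(100):
    start = (step * batch) % len(X)
    idx = order[start:start + batch]
    xb, yb = X[idx], y[idx]
    p = 1.0 / (1.0 + np.exp(-(xb @ w)))          # forward pass (sigmoid)
    w = w - lr * (xb.T @ (p - yb)) / len(idx)    # one SGD step
    if step % k == 0:
        transcript.append({
            "step": step,
            "weights": w.copy(),
            "batch_hash": hashlib.sha256(idx.tobytes()).hexdigest(),
        })
print(f"logged {len(transcript)} checkpoints")
```
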
“…If a trusted third party can reproduce these states (within some acceptable tolerance threshold) with the help of the honest principal's secret information, then the honest principal's ownership of the model is validated. This is the premise of the proof-of-learning approach to enable ownership [151]. Call to action: First, notice that claiming model ownership and differentiating between two models are two sides of the same coin.…”
Section: B. Model Ownership
Confidence: 99%
From: SoK: Machine Learning Governance (Chandrasekaran, Jia, Thudi et al., 2021; Preprint, Self Cite)
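
The reproduce-within-tolerance check described in this statement can be sketched as a verifier that replays each logged segment of gradient descent from one checkpoint and accepts only if it lands close enough to the next. This pairs with the logging sketch above; `get_batch` (which must rebuild each step's batch from the prover's secret data ordering), the choice of norm, and the tolerance `eps` are assumptions for illustration, not the protocol of Jia et al. [151]:

```python
# Sketch of PoL-style verification: re-run the SGD steps between two
# recorded checkpoints and accept only if the reproduced weights land
# within a tolerance of the claimed ones. Hypothetical names throughout.
import numpy as np

def replay_segment(w_start, steps, get_batch, lr=0.1):
    """Re-run the given SGD steps of the toy logistic-regression update."""
    w = w_start.copy()
    for s in steps:
        xb, yb = get_batch(s)                    # batch rebuilt from the
        p = 1.0 / (1.0 + np.exp(-(xb @ w)))      # prover's secret ordering
        w = w - lr * (xb.T @ (p - yb)) / len(xb)
    return w

def verify(transcript, get_batch, eps=1e-4):
    """Accept iff every consecutive checkpoint pair reproduces within eps."""
    for a, b in zip(transcript, transcript[1:]):
        # checkpoint a holds weights *after* step a["step"], so replay
        # steps a["step"]+1 .. b["step"] to reach checkpoint b
        w_end = replay_segment(a["weights"],
                               range(a["step"] + 1, b["step"] + 1),
                               get_batch)
        if np.linalg.norm(w_end - b["weights"]) > eps:
            return False                          # reproduction failed
    return True
```
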
“…Reactive defenses are post-hoc methods used to determine whether a suspect model was stolen. Examples of such methods are watermarking (Jia et al, 2020), dataset inference (Maini et al, 2021), and Proof-of-Learning (PoL) (Jia et al, 2021)…”
Section: Defenses Against Model Extraction
Confidence: 99%
“…If one could obtain M, then they could avoid approximate unlearning altogether and use that as the unlearned model. Another issue is that one can retrain on the same sequence of data from the same initialization and obtain different terminal weights [20] (which is caused by randomness and numerical instabilities in floating-point operations).…”
Section: Unlearning Metrics
Confidence: 99%
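
The non-reproducibility point in this last statement rests partly on floating-point arithmetic being non-associative: the same reduction executed in two different orders generally disagrees in the low-order bits, and such discrepancies compound over a training run. A minimal, self-contained illustration of the underlying effect (not code from the cited work):

```python
# Floating-point addition is not associative, so two "identical" runs
# whose reductions execute in different orders can diverge. Summing the
# same float32 values forward and backward usually gives different bits.
import numpy as np

rng = np.random.default_rng(0)
g = rng.normal(size=100_000).astype(np.float32)  # stand-in for gradient terms

s_fwd = np.float32(0.0)
for v in g:                 # accumulate in one order...
    s_fwd += v
s_rev = np.float32(0.0)
for v in g[::-1]:           # ...and in the reverse order
    s_rev += v

print(s_fwd == s_rev)                        # typically False
print(abs(float(s_fwd) - float(s_rev)))      # small but nonzero drift
```
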