2020
DOI: 10.48550/arxiv.2003.01668
Preprint

Model Assertions for Monitoring and Improving ML Models

Abstract: ML models are increasingly deployed in settings with real world interactions such as vehicles, but unfortunately, these models can fail in systematic ways. To prevent errors, ML engineering teams monitor and continuously improve these models. We propose a new abstraction, model assertions, that adapts the classical use of program assertions as a way to monitor and improve ML models. Model assertions are arbitrary functions over a model's input and output that indicate when errors may be occurring, e.g., a func…
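To make the abstraction concrete, here is a minimal Python sketch of one model assertion in the spirit of the paper's running example: a rule that fires when a tracked object's predicted class flickers across consecutive video frames. The data structure and names (`Prediction`, `flickering_assertion`) are illustrative assumptions, not the API from the paper.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    """One model output for a tracked object in a video frame (illustrative)."""
    frame_id: int
    object_id: int
    label: str

def flickering_assertion(history: list[Prediction]) -> bool:
    """A model assertion over the model's outputs: flags when the same
    tracked object is assigned different classes across a short window
    of frames, which usually signals a model error rather than a real
    change in the world. Returns True when the assertion fires."""
    labels = [p.label for p in sorted(history, key=lambda p: p.frame_id)]
    # Fire if two or more distinct labels appear within the window.
    return len(set(labels)) > 1

# Usage: run the assertion over a sliding window of recent predictions.
window = [
    Prediction(frame_id=10, object_id=7, label="car"),
    Prediction(frame_id=11, object_id=7, label="pedestrian"),
    Prediction(frame_id=12, object_id=7, label="car"),
]
if flickering_assertion(window):
    print("assertion fired: object 7 flickers between classes")
```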

Cited by 4 publications (6 citation statements, all classified as mentioning).
References 32 publications.
“…Aside from multi-view disagreements, our framework can be easily extended to any type of assertion, such as class entropy, high network loss, etc. While there exists literature about model assertions [12], we are novel in that we apply the checks during training and inference on an existing dataset, while theirs does so for Active Learning.…”
Section: Assertion-guided Point Sampling (mentioning, confidence: 99%)
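The class-entropy check mentioned in this statement can be sketched in a few lines; the function name and threshold here are illustrative assumptions. The assertion fires whenever the predictive distribution is too flat to trust.

```python
import math

def class_entropy_assertion(probs: list[float], threshold: float = 1.0) -> bool:
    """Fires when the predicted class distribution has high entropy,
    i.e., the model is unsure which class the input belongs to."""
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return entropy > threshold

print(class_entropy_assertion([0.95, 0.03, 0.02]))  # False: confident prediction
print(class_entropy_assertion([0.40, 0.35, 0.25]))  # True: near-uniform, fires
```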
“…Dutta et al. [24] present FLEX, which uses approximate assertions to compare actual and expected values while systematically identifying the acceptable bound between the actual and expected output that minimizes flakiness. Kang et al. [36] introduce model assertions, which can be ‘exact’ or ‘soft’, adapting the classical use of program assertions as a way to monitor and improve ML models.…”
Section: Specific Applications (mentioning, confidence: 99%)
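The ‘exact’ versus ‘soft’ distinction quoted above can be illustrated with a small sketch: an exact assertion returns a hard pass/fail verdict, while a soft assertion returns a severity score that downstream tooling can threshold. The interface below is an assumption for illustration, not the implementation from Kang et al. [36].

```python
from typing import Callable

# An exact assertion returns a boolean; a soft assertion returns a
# severity score in [0, 1] that monitoring tooling can threshold.
ExactAssertion = Callable[[dict], bool]
SoftAssertion = Callable[[dict], float]

def exact_nonempty_output(output: dict) -> bool:
    """Exact: fires if the detector returned no boxes at all."""
    return len(output.get("boxes", [])) == 0

def soft_low_confidence(output: dict, floor: float = 0.5) -> float:
    """Soft: severity grows as mean detection confidence drops below `floor`."""
    scores = output.get("scores", [])
    if not scores:
        return 1.0  # no detections at all: maximally suspicious
    mean = sum(scores) / len(scores)
    return max(0.0, (floor - mean) / floor)

output = {"boxes": [(0, 0, 10, 10)], "scores": [0.31]}
print(exact_nonempty_output(output))          # False: boxes exist
print(round(soft_low_confidence(output), 2))  # 0.38: moderately suspicious
```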
“…Corduroy [54] introduces metamorphic assertions, built on top of Java Modelling Language (JML). Model assertions [36] adapt the classical use of program assertions, tailored to the specific needs of ML programs, in particular uncertainty in output.…”
Section: Technique (mentioning, confidence: 99%)
“…Comparing the model's reasoning process to human expectations is a powerful tool for finding bugs, but less broadly applicable than approaches that only require level 2 access. Approaches for debugging models include model assertions [41], editing models, regularization, data augmentation, data pre-processing, prediction post-processing, and anomaly detection [34]. Among these methods, the vast majority are compatible with level 2 access.…”
Section: The Utility Of Interpretability Beyond Trust (mentioning, confidence: 99%)