Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society
DOI: 10.1145/3461702.3462571

Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty

Abstract: Transparency of algorithmic systems entails exposing system properties to various stakeholders for purposes that include understanding, improving, and/or contesting predictions. The machine learning (ML) community has mostly considered explainability as a proxy for transparency. With this work, we seek to encourage researchers to study uncertainty as a form of transparency and practitioners to communicate uncertainty estimates to stakeholders. First, we discuss methods for assessing uncertainty. Then, we descr…
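The "methods for assessing uncertainty" the abstract refers to span several families of estimators. As one hedged illustration only (not the paper's own code), the sketch below computes predictive entropy and a mutual-information decomposition from a deep ensemble's softmax outputs; the ensemble, array shapes, and threshold are assumptions made for the example.

```python
# Minimal sketch: assessing predictive uncertainty from an ensemble of classifiers.
# The ensemble, shapes, and threshold below are illustrative assumptions.
import numpy as np

def predictive_entropy(prob_stack):
    """prob_stack: (n_members, n_samples, n_classes) softmax outputs."""
    mean_probs = prob_stack.mean(axis=0)                        # ensemble-averaged prediction
    return -(mean_probs * np.log(mean_probs + 1e-12)).sum(-1)   # total uncertainty per sample

def mutual_information(prob_stack):
    """Epistemic (model) uncertainty: entropy of the mean minus mean of member entropies."""
    total = predictive_entropy(prob_stack)
    member_entropy = -(prob_stack * np.log(prob_stack + 1e-12)).sum(-1).mean(axis=0)
    return total - member_entropy

# Usage with a hypothetical ensemble of five trained classifiers:
# probs = np.stack([m.predict_proba(X) for m in ensemble])   # (5, N, C)
# flag_for_review = predictive_entropy(probs) > threshold    # communicate/act on the estimate
```

A decomposition like this is one way the resulting estimates can be surfaced to stakeholders, e.g. routing high-entropy predictions to human review.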

Cited by 146 publications (101 citation statements). References 104 publications (95 reference statements).
“…Concerning auditing, recent work underscores the importance of being able to analyze algorithmic outputs to detect and correct for the harm of unfair discrimination [4,110]. Transparency tends to be treated as a property of models, particularly with regard to whether a model is interpretable or explainable to relevant stakeholders [14,40,52]. More recently, computational work has begun to take a more expansive view of transparency, applying it to other parts of the ML pipeline, such as problem formulation, data provenance, and model selection choices [50,81,121,122].…”
Section: Contemporary Interventions in Accountability and Data-Driven…
confidence: 99%
“…In short, satisfying mathematical guarantees can serve as a scapegoat because pointing to mathematical claims satisfied at the model level can serve to obscure the need to account for harms that occur at the individual-decision level. Non-determinism. Data-driven algorithmic systems that involve ML exhibit non-determinism.…”
Section: The Computer As Scapegoat
confidence: 99%
“…An emerging challenge with using NNs for anomaly detection is the quantification of uncertainty [20]. Uncertainty is a core component for promoting algorithmic transparency and hence, the advancement of trustworthy ML [22].…”
Section: Background and Related Work
confidence: 99%
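The excerpt above flags uncertainty quantification as an open challenge for NN-based anomaly detectors. As a hedged illustration (not drawn from the cited works), Monte-Carlo dropout is one common way to attach an uncertainty estimate to a neural detector's score; the PyTorch autoencoder `model` and the reconstruction-error score below are assumptions made for the sketch.

```python
# Illustrative sketch: MC-dropout uncertainty for a reconstruction-based anomaly detector.
# Assumes a PyTorch autoencoder `model` containing dropout layers.
import torch

@torch.no_grad()
def mc_dropout_anomaly_score(model, x, n_passes=30):
    """Return the mean anomaly score and its spread across stochastic forward passes."""
    model.eval()
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()  # keep only dropout stochastic at test time (MC dropout)
    scores = []
    for _ in range(n_passes):
        recon = model(x)
        scores.append(((recon - x) ** 2).flatten(1).mean(dim=1))  # per-sample reconstruction error
    scores = torch.stack(scores)                   # (n_passes, batch_size)
    return scores.mean(dim=0), scores.std(dim=0)   # anomaly score and its uncertainty
```

The standard deviation across passes gives a simple, communicable measure of how much the detector's verdict should be trusted for each input.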
“…Depending on user needs, it may be more critical to provide transparent information about a model's capabilities, limitations (e.g. uncertainty [10]) or provenance [8]. Moreover, users may need additional information beyond algorithmic explanations to fill in gaps of understanding.…”
Section: Introduction
confidence: 99%