2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT 2022)
DOI: 10.1145/3531146.3533179
The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations

Cited by 31 publications (9 citation statements)
References 54 publications
“…For example, the U.S. Food and Drug Administration (FDA) recently recommended that the deployment of any AI-based medical device used to inform human decisions must address "human factors considerations and the human interpretability of model inputs" 35. While increasing model interpretability is an appealing approach to humans, existing approaches to interpretability and explainability are poorly suited to health care 36, may decrease human ability to identify model mistakes 7, and increase model bias (i.e., the gap in model performance between the worst and best subgroup) 37. Any successful deployment must thus rigorously test and validate several human-AI recommendation styles to ensure that AI systems are substantially improving decision making.…”
Section: Discussion (citation type: mentioning)
confidence: 99%
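The quoted passage defines model bias as the gap in model performance between the worst and best subgroup. As a rough illustration of that definition only, the following Python sketch computes per-subgroup accuracy and the best-minus-worst gap; the data, subgroup labels, and the choice of accuracy as the performance metric are illustrative assumptions, not drawn from the cited works.

# Illustrative sketch (not from the cited works): "model bias" measured as the
# gap between the best- and worst-performing subgroup, using accuracy as the metric.
from collections import defaultdict

def subgroup_performance_gap(y_true, y_pred, subgroups):
    """Return per-subgroup accuracy and the best-minus-worst accuracy gap."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, subgroups):
        total[group] += 1
        correct[group] += int(truth == pred)
    accuracy = {g: correct[g] / total[g] for g in total}
    gap = max(accuracy.values()) - min(accuracy.values())
    return accuracy, gap

# Toy data: two subgroups with unequal accuracy.
y_true    = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred    = [1, 0, 1, 0, 0, 0, 0, 1]
subgroups = ["A", "A", "A", "A", "B", "B", "B", "B"]
per_group, gap = subgroup_performance_gap(y_true, y_pred, subgroups)
print(per_group)  # {'A': 0.75, 'B': 0.5}
print(gap)        # 0.25 -> the performance gap referred to as "bias" above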
“…Second, it has also remained unknown whether AI-based explanations exhibit a bias, where their reliability differs across surgeon sub-cohorts. Although preliminary studies have begun to explore the intersection of bias and explanations 26,27,43, they do not leverage human expert explanations, are limited to non-surgical domains, and do not present findings for video-based AI systems. Third, the development of a strategy that consistently improves the reliability and fairness of explanations has been underexplored.…”
Section: Discussion (citation type: mentioning)
confidence: 99%
“…However, these studies remain qualitative and thereby do not systematically investigate whether explanations are consistently reliable across data points. Studies that quantitatively evaluate AI-based explanations often exclude a comparison to human explanations 24,25, a drawback that extends to the preliminary studies aimed at also assessing the fairness of such explanations 26,27. Notably, previous work has not quantitatively compared AI-based explanations to human explanations in the context of surgical videos, nor has it proposed a strategy to enhance the reliability and fairness of such explanations.…”
Citation type: mentioning
confidence: 99%
“…The concept of technical robustness is an important cornerstone for ensuring Trustworthy AI. The improvement of balanced and robust training techniques and datasets can enhance not only fairness (see Section 4.4) but also explainability (see Section 4.2) [110].…”
Section: Technical Robustness and Generalization (citation type: mentioning)
confidence: 99%