2022
DOI: 10.3390/s22145346
Validating Automatic Concept-Based Explanations for AI-Based Digital Histopathology

Abstract: Digital histopathology poses several challenges such as label noise, class imbalance, limited availability of labelled data, and several latent biases to deep learning, negatively influencing transparency, reproducibility, and classification performance. In particular, biases are well known to cause poor generalization. Proposed tools from explainable artificial intelligence (XAI), bias detection, and bias discovery suffer from technical challenges, complexity, unintuitive usage, inherent biases, or a semantic…

Citations: cited by 9 publications (2 citation statements). References: 82 publications (160 reference statements).
“…The inability to understand how an AI system arrives at its decision has been termed the “black box problem.” Without explainability, can providers and researchers truly educate patients such that they can give informed consent? Sauter et al 52 investigated the utility of automated concept-based explanation in comprehending CNN image analysis wherein results showed that automated concept-based explanation could be an effective tool for analyzing CNN decision-making and enhancing researcher/physician diagnostic and treatment decisions while improving the transparency of CNNs. When mechanisms are properly understood and data sets are used responsibly, AI could optimize pathology workflows, enhance diagnostic accuracy, help the provider engage in prognostic prediction, and find individualized treatments.…”
Section: Discussion (mentioning); confidence: 99%
“…A significant advantage of ACE, compared to TCAV, is that it mitigates the need for human supervision, as manual labeling is not required. However, the method relies on the manual setting of parameters, which can affect the outcome, e.g., resulting in duplicated or mixed concepts [25].…”
Section: Case-based Explanations (mentioning); confidence: 99%
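To illustrate the parameter sensitivity noted in this statement, the following is a minimal, hypothetical sketch of the clustering step at the heart of ACE-style concept discovery. It is not the cited authors' implementation: synthetic activation vectors stand in for real CNN features of image segments, and the names n_concepts, concept_a, and concept_b are illustrative assumptions. It only demonstrates that the manually chosen number of concept clusters can split one underlying concept into duplicates (or, with too few clusters, merge distinct concepts into a mixed one).

# Minimal ACE-style concept-discovery sketch (NOT the cited implementation).
# Synthetic activations replace real CNN features so the example runs standalone.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Pretend these are bottleneck activations of image segments drawn from
# two underlying "true" concepts.
concept_a = rng.normal(loc=0.0, scale=0.3, size=(100, 64))
concept_b = rng.normal(loc=2.0, scale=0.3, size=(100, 64))
activations = np.vstack([concept_a, concept_b])

# n_concepts is the manually set parameter the quoted statement refers to.
for n_concepts in (2, 4):
    km = KMeans(n_clusters=n_concepts, n_init=10, random_state=0).fit(activations)
    sizes = np.bincount(km.labels_).tolist()
    print(f"n_concepts={n_concepts}: cluster sizes = {sizes}")
# With n_concepts=2 the clusters recover the two true concepts; with
# n_concepts=4 each true concept is split into near-duplicate clusters.

In a full ACE pipeline the activation vectors would come from segmenting each image (for example into superpixels at several granularities) and passing the segments through the trained network, so the segmentation settings act as a further manual parameter with a similar influence on the discovered concepts.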