2023
DOI: 10.1002/cjp2.322

Explainability and causability in digital pathology

Abstract: The current move towards digital pathology enables pathologists to use artificial intelligence (AI)‐based computer programmes for the advanced analysis of whole slide images. However, currently, the best‐performing AI algorithms for image analysis are deemed black boxes since it remains – even to their developers – often unclear why the algorithm delivered a particular result. Especially in medicine, a better understanding of algorithmic decisions is essential to avoid mistakes and adverse effects on patients.…

Cited by 26 publications (16 citation statements)
References 47 publications
“…We do not know which features in the pathological sections drive the model's final output, which is a problem for clinicians. To address the problem of model explainability, the development of explainable computational pathology combined with explainable AI in recent years may be an important direction for future research (39).…”
Section: Discussion
Citation type: mentioning, confidence: 99%
“…And, as there is still a large gap between ML models' pattern recognition and human-level concept learning [118], the ability to understand all the concepts included in diagnostic/classification criteria may be beyond current ML models' capabilities. Notably, concept-based explainable methods [119,120] could facilitate a rigorous assessment of this capability by pathologists in the future.…”
Section: Expanding Recognition Capabilities of ML Models
Citation type: mentioning, confidence: 99%
“…The importance of proving the scientific validity, analytical capability, and clinical effectiveness of AI applications in compliance with legal requirements is covered. The necessity for explanations to reach a certain level of causal understanding in a particular context is highlighted by the introduction of the idea of causability as a metric of the usefulness of explanations in human–AI interaction [75].…”
Section: Reverse Engineering
Citation type: mentioning, confidence: 99%