2021
DOI: 10.1101/2021.08.16.456518
Preprint

Deep Learning Features Encode Interpretable Morphologies within Histological Images

Abstract: Convolutional neural networks (CNNs) are revolutionizing digital pathology by enabling machine learning-based classification of a variety of phenotypes from hematoxylin and eosin (H&E) whole slide images (WSIs), but the interpretation of CNNs remains difficult. Most studies have considered interpretability in a post hoc fashion, e.g., by presenting example regions with strongly predicted class labels. However, such an approach does not explain the biological features that contribute to correct predictions. …
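As an illustration of the kind of deep-feature extraction the abstract refers to, the sketch below pulls a pooled feature vector from a single H&E tile with a pretrained Inception v3 backbone (the network named in the citing statement further down). The tile path, preprocessing settings, and ImageNet weights are illustrative assumptions, not the authors' exact pipeline.

# Minimal sketch: deep features from one H&E tile via a pretrained CNN.
# Inception v3 and the ImageNet preprocessing below are assumptions for
# illustration; they are not the paper's exact extraction pipeline.
import torch
from torchvision import models, transforms
from PIL import Image

# Load Inception v3 with ImageNet weights and drop the classification head
# so the forward pass returns the 2048-dimensional pooled feature vector.
model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
model.fc = torch.nn.Identity()
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((299, 299)),                 # Inception v3 input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def tile_features(tile_path: str) -> torch.Tensor:
    """Return the deep feature vector for a single H&E tile image."""
    tile = Image.open(tile_path).convert("RGB")
    batch = preprocess(tile).unsqueeze(0)          # shape (1, 3, 299, 299)
    with torch.no_grad():
        feats = model(batch)                       # shape (1, 2048)
    return feats.squeeze(0)

Each whole-slide image can then be summarized by aggregating (e.g. averaging) its tile-level feature vectors before any downstream classification.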

Cited by 2 publications (2 citation statements)
References 46 publications
“…These features may describe the images in more detail. In future studies, we will combine abstract features such as deep features [78] to further supplement image descriptions, explore genes related to these features, and conduct joint analysis with our findings.…”
Section: Discussion
confidence: 99%
“…5A). To interpret the information encoded by Inception that drives classification performance (Foroughi Pour et al. 2022), we examined feature 1895, the feature with the highest variable importance in the random forest model (Fig. S7).…”
Section: An H&E-based Classifier Identifies Xenograft-transplant Ly...
confidence: 99%
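The statement above describes ranking Inception features by random-forest variable importance ("feature 1895" being the top-ranked one in that study). A minimal sketch of that ranking step is shown below, assuming a precomputed tiles-by-features matrix X and class labels y; both are random placeholders here, and the citing study's data and model settings are not reproduced.

# Hedged sketch: rank deep features by random-forest variable importance.
# X and y are random placeholders standing in for a real tiles-by-features
# matrix and its class labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2048))   # placeholder deep-feature matrix
y = rng.integers(0, 2, size=500)   # placeholder binary class labels

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X, y)

# Gini-based importances; the top-ranked index plays the role that
# "feature 1895" plays in the citing study.
ranking = np.argsort(rf.feature_importances_)[::-1]
print("Most important deep feature index:", ranking[0])
print("Top five feature indices:", ranking[:5].tolist())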