Deep learning has recently gained popularity in digital pathology due to its high prediction quality. However, the medical domain requires explanation and insight beyond standard quantitative performance evaluation. Explanation methods have recently emerged but are so far rarely used in medicine. This work shows their application to generate heatmaps that make it possible to resolve common challenges encountered in deep learning-based digital histopathology analyses. These challenges comprise biases typically inherent to histopathology data. We study binary classification tasks of tumor tissue discrimination in publicly available haematoxylin and eosin slides of various tumor entities and investigate three types of biases: (1) biases which affect the entire dataset, (2) biases which are by chance correlated with class labels, and (3) sampling biases. While standard analyses focus on patch-level evaluation, we advocate pixel-wise heatmaps, which offer a more precise and versatile diagnostic instrument and furthermore help to reveal biases in the data. This insight is shown not only to detect, but also to help remove, the effects of common hidden biases, which improves generalization within and across datasets. For example, we observed a trend of the area under the receiver operating characteristic curve improving by 5% when reducing a labeling bias. Explanation techniques are thus demonstrated to be a helpful and highly relevant tool for the development and deployment phases within the life cycle of real-world applications in digital pathology.

Related work

Models in digital pathology
Similar to the recent trend in computer vision, where end-to-end training with deep learning clearly prevails over classification of handcrafted features, an increased use of deep learning, e.g., convolutional neural networks (CNNs), is also noticeable in digital pathology. Nonetheless, there are some works on combining support vector machines with image feature algorithms 21, 23-25. Meanwhile, while some works propose custom-designed networks 26, 27, e.g., a spatially constrained, locality-sensitive CNN for the detection of nuclei in histopathological images 26, most often standard deep learning architectures (e.g., AlexNet 28, GoogLeNet 3, ResNet 29) as well as hybrids are used for digital pathology 22, 30-33. According to 6, currently the most common architecture is the GoogLeNet Inception-V3 model.

Interpretability in computational pathology
As discussed above, more and more developments have emerged that introduce the possibility of explanation (e.g., [11][12][13][14][15][16][17][18]; for a summary of implementations see 34), few of which have been applied in digital pathology 21, 22, 30, [35][36][37][38][39][40]. The visualization of a support vector machine's decision on Bag-of-Visual-Words features in a histopathological discrimination task is explored in 21. The authors present an explanatory approach for evidence of tumor and lymphocytes in H&E images as well as for molecular properties which-unli...
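To illustrate the pixel-wise heatmaps advocated above, the following is a minimal sketch of a gradient-based saliency map for a binary tumor-patch classifier. It does not reproduce the paper's own explanation pipeline; the model, weights, and file path are hypothetical, and gradient saliency stands in here for whatever attribution method is used in practice.

```python
# Minimal sketch: pixel-wise saliency heatmap for a binary tumor-patch classifier.
# Assumptions: a PyTorch model with a single "tumor" logit; the fine-tuned weights
# and the input patch file are hypothetical placeholders.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Hypothetical fine-tuned ResNet with one output logit (tumor vs. non-tumor).
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 1)
model.eval()

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])
patch = preprocess(Image.open("patch.png").convert("RGB")).unsqueeze(0)  # hypothetical H&E patch
patch.requires_grad_(True)

logit = model(patch)       # tumor evidence score for this patch
logit.sum().backward()     # gradient of the score w.r.t. the input pixels

# Collapse channels to one relevance value per pixel and normalize for display.
heatmap = patch.grad.abs().max(dim=1).values.squeeze(0)
heatmap = (heatmap - heatmap.min()) / (heatmap.max() - heatmap.min() + 1e-8)
print(heatmap.shape)  # (224, 224): one relevance value per pixel
```

Such a map assigns relevance to every pixel of the patch, which is what allows the bias inspection described above, in contrast to a single patch-level score.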
The extent of tumor-infiltrating lymphocytes (TILs), along with immunomodulatory ligands, tumor-mutational burden and other biomarkers, has been demonstrated to be a marker of response to immune-checkpoint therapy in several cancers. Pathologists have therefore started to devise standardized visual approaches to quantify TILs for therapy prediction. However, despite successful standardization efforts, visual TIL estimation is slow, has limited precision, and lacks the ability to evaluate more complex properties such as TIL distribution patterns. Therefore, computational image analysis approaches are needed to provide standardized and efficient TIL quantification. Here, we discuss different automated TIL scoring approaches, ranging from classical image segmentation, where cell boundaries are identified and the resulting objects classified according to shape properties, to machine learning-based approaches that directly classify cells without segmentation but rely on large amounts of training data. In contrast to conventional machine learning (ML) approaches, which are often criticized for their "black-box" characteristics, we also discuss explainable machine learning. Such approaches render ML results interpretable and explain the computational decision-making process through high-resolution heatmaps that highlight TILs and cancer cells, and therefore allow for quantification and plausibility checks in biomedical research and diagnostics.
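The classical segmentation route mentioned above can be sketched as follows. This is an illustrative example only, assuming a haematoxylin-dominated nuclear signal; the file name, thresholds, and shape criteria are hypothetical and not the authors' protocol.

```python
# Schematic sketch of classical segmentation-based TIL counting (illustrative only):
# threshold a nuclear-stain signal, label connected objects, and keep objects whose
# size and shape roughly match lymphocyte nuclei. All thresholds are hypothetical.
import numpy as np
from skimage import io, color, filters, measure, morphology

image = io.imread("he_tile.png")                 # hypothetical H&E tile
nuclei = 1.0 - color.rgb2gray(image[..., :3])    # dark, haematoxylin-rich nuclei become bright

mask = nuclei > filters.threshold_otsu(nuclei)            # global Otsu threshold
mask = morphology.remove_small_objects(mask, min_size=20)  # drop tiny debris

labels = measure.label(mask)
til_count = 0
for region in measure.regionprops(labels):
    # Illustrative lymphocyte criteria: small, compact, nearly round nuclei.
    if 20 <= region.area <= 150 and region.eccentricity < 0.8:
        til_count += 1

print(f"Candidate TILs in tile: {til_count}")
```

The ML-based alternatives discussed above replace the hand-tuned threshold and shape rules with a classifier trained on annotated cells, which is where the need for large training datasets and for explainable heatmaps arises.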
In recent years, deep neural networks have revolutionized many application domains of machine learning and are key components of many critical decision or predictive processes. It is therefore crucial that domain specialists can understand and analyze the actions and predictions of even the most complex neural network architectures. Despite these arguments, neural networks are often treated as black boxes. In an attempt to alleviate this shortcoming, many analysis methods have been proposed, yet the lack of reference implementations often makes a systematic comparison between the methods a major effort. The presented library, iNNvestigate, addresses this by providing a common interface and out-of-the-box implementations for many analysis methods, including the reference implementations for PatternNet and PatternAttribution as well as for LRP methods. To demonstrate the versatility of iNNvestigate, we provide an analysis of image classifications for a variety of state-of-the-art neural network architectures.
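A minimal usage sketch of the common iNNvestigate interface is given below, following the pattern in its documentation. The exact import path of helpers such as model_wo_softmax may differ between library versions, and the random input batch is only a placeholder.

```python
# Minimal sketch of the common iNNvestigate interface (based on its documentation;
# exact helper locations, e.g. for model_wo_softmax, may vary between versions).
import numpy as np
import innvestigate
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input

model = VGG16(weights="imagenet")
model_wo_sm = innvestigate.model_wo_softmax(model)  # strip the softmax so LRP sees the logits

# Analyzers are created through one common factory; "lrp.epsilon" is one of many
# method names, alongside e.g. "gradient", "pattern.net", and "pattern.attribution".
analyzer = innvestigate.create_analyzer("lrp.epsilon", model_wo_sm)

x = preprocess_input(np.random.rand(1, 224, 224, 3) * 255.0)  # placeholder input batch
relevance = analyzer.analyze(x)                                # pixel-wise relevance map
print(relevance.shape)  # (1, 224, 224, 3)
```

The shared create_analyzer/analyze interface is what enables the systematic method comparisons described in the abstract, since every supported attribution method is invoked the same way.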