Deep learning has recently gained popularity in digital pathology due to its high prediction quality. However, the medical domain requires explanation and insight for a better understanding beyond standard quantitative performance evaluation. Explanation methods have recently emerged but are so far still rarely used in medicine. This work shows their application to generate heatmaps that make it possible to resolve common challenges encountered in deep learning-based digital histopathology analyses. These challenges comprise biases typically inherent to histopathology data. We study binary classification tasks of tumor tissue discrimination in publicly available haematoxylin and eosin slides of various tumor entities and investigate three types of biases: (1) biases which affect the entire dataset, (2) biases which are by chance correlated with class labels, and (3) sampling biases. While standard analyses focus on patch-level evaluation, we advocate pixel-wise heatmaps, which offer a more precise and versatile diagnostic instrument and furthermore help to reveal biases in the data. This insight is shown not only to detect but also to help remove the effects of common hidden biases, which improves generalization within and across datasets. For example, we observed a trend towards an improvement of the area under the receiver operating characteristic curve by 5% when reducing a labeling bias. Explanation techniques are thus demonstrated to be a helpful and highly relevant tool for the development and deployment phases within the life cycle of real-world applications in digital pathology.

Related work

Models in digital pathology
Similar to the recent trend in computer vision, where end-to-end training with deep learning clearly prevails over the classification of handcrafted features, an increased use of deep learning, e.g. convolutional neural networks (CNNs), is also noticeable in digital pathology. Nonetheless, there are some works on combining support vector machines with image feature algorithms 21,23-25. While some works propose custom-designed networks 26,27, e.g. a spatially constrained, locality-sensitive CNN for the detection of nuclei in histopathological images 26, most often standard deep learning architectures (e.g. AlexNet 28, GoogLeNet 3, ResNet 29) as well as hybrids are used for digital pathology 22,30-33. According to 6, the most common architecture is currently the GoogLeNet Inception-V3 model.

Interpretability in computational pathology
As discussed above, more and more developments have emerged that introduce the possibility of explanation (e.g. 11-18; for a summary of implementations see 34), few of which have so far been applied in digital pathology 21,22,30,35-40. The visualization of a support vector machine's decision on Bag-of-Visual-Words features in a histopathological discrimination task is explored in 21. The authors present an explanatory approach for evidence of tumor and lymphocytes in H&E images as well as for molecular properties which-unli...
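To make the notion of a pixel-wise heatmap for a patch classifier more concrete, the following is a minimal sketch using plain gradient saliency in PyTorch. It assumes a hypothetical ResNet-18 with two output classes (tumor vs. non-tumor) applied to a 224x224 H&E patch; the work summarized above uses dedicated explanation methods, so this only illustrates the general idea of attributing a class score back to individual pixels.

```python
# Minimal sketch: pixel-wise saliency heatmap for a tumor/non-tumor patch classifier.
# Assumptions (not from the paper): a PyTorch ResNet-18 with 2 output classes and a
# random tensor standing in for a 224x224 RGB H&E patch.
import torch
import torchvision.models as models

model = models.resnet18(weights=None, num_classes=2)  # hypothetical patch classifier
model.eval()

patch = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for an H&E patch

logits = model(patch)
tumor_class = 1                      # assumed index of the "tumor" class
logits[0, tumor_class].backward()    # gradient of the tumor score w.r.t. input pixels

# Collapse the channel dimension to obtain one relevance value per pixel.
heatmap = patch.grad.abs().sum(dim=1).squeeze(0)   # shape: (224, 224)
heatmap = heatmap / heatmap.max()                  # normalize to [0, 1] for display
print(heatmap.shape)
```

The resulting array can be overlaid on the original patch to inspect which pixels drive the tumor score, which is the kind of plausibility check the heatmap-based analyses above rely on.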
The extent of tumor-infiltrating lymphocytes (TILs), along with immunomodulatory ligands, tumor-mutational burden and other biomarkers, has been demonstrated to be a marker of response to immune-checkpoint therapy in several cancers. Pathologists have therefore started to devise standardized visual approaches to quantify TILs for therapy prediction. However, despite successful standardization efforts, visual TIL estimation is slow, has limited precision, and lacks the ability to evaluate more complex properties such as TIL distribution patterns. Therefore, computational image analysis approaches are needed to provide standardized and efficient TIL quantification. Here, we discuss different automated TIL scoring approaches, ranging from classical image segmentation, where cell boundaries are identified and the resulting objects are classified according to shape properties, to machine learning-based approaches that classify cells directly without segmentation but rely on large amounts of training data. In contrast to conventional machine learning (ML) approaches, which are often criticized for their "black-box" characteristics, we also discuss explainable machine learning. Such approaches render ML results interpretable and explain the computational decision-making process through high-resolution heatmaps that highlight TILs and cancer cells and therefore allow for quantification and plausibility checks in biomedical research and diagnostics.
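As a rough illustration of the classical segmentation route mentioned above (segment nuclei, then classify the resulting objects by shape), the following scikit-image sketch counts small, round objects as a crude lymphocyte proxy. The thresholds and shape cut-offs are illustrative assumptions, not values from the article, and the synthetic input only stands in for a nuclear-stain image.

```python
# Minimal sketch: classical segmentation-based TIL counting (segment, then
# classify objects by size and roundness). Cut-offs are assumed, not published.
import numpy as np
from skimage import filters, measure, morphology

def count_til_like_objects(nuclei_img: np.ndarray) -> int:
    """Count small, round nuclei (a crude lymphocyte proxy) in a grayscale
    image where nuclei are darker than the background."""
    # 1) Segment nuclei by global thresholding (Otsu) and clean small specks.
    mask = nuclei_img < filters.threshold_otsu(nuclei_img)
    mask = morphology.remove_small_objects(mask, min_size=20)

    # 2) Label connected components and classify them by shape properties.
    labels = measure.label(mask)
    count = 0
    for region in measure.regionprops(labels):
        small = 20 <= region.area <= 200      # lymphocyte-sized (assumed range, px)
        round_ = region.eccentricity < 0.6    # roughly circular
        if small and round_:
            count += 1
    return count

# Synthetic stand-in image: dark disks on a bright background.
img = np.full((128, 128), 200, dtype=np.uint8)
rr, cc = np.ogrid[:128, :128]
for cy, cx in [(30, 30), (80, 90), (100, 40)]:
    img[(rr - cy) ** 2 + (cc - cx) ** 2 <= 5 ** 2] = 50
print(count_til_like_objects(img))
```

The learning-based and explainable-ML approaches discussed in the text replace the hand-tuned cut-offs in step 2 with trained classifiers, at the cost of requiring large annotated training sets.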
Head and neck squamous cell carcinoma (HNSC) patients are at risk of suffering from either pulmonary metastases or a second squamous cell carcinoma of the lung (LUSC). Differentiating pulmonary metastases from primary lung cancers is of high clinical importance, but not possible in most cases with current diagnostics. To address this, we performed DNA methylation profiling of primary tumors and trained three different machine learning methods to distinguish metastatic HNSC from primary LUSC. We developed an artificial neural network that correctly classified 96.4% of the cases in a validation cohort of 279 patients with HNSC and LUSC as well as normal lung controls, outperforming support vector machines (95.7%) and random forests (87.8%). Prediction accuracies of more than 99% were achieved for 92.1% (neural network), 90% (support vector machine), and 43% (random forest) of these cases by applying thresholds to the resulting probability scores and excluding samples with low confidence. As independent clinical validation of the approach, we analyzed a series of 51 patients with a history of HNSC and a second lung tumor, demonstrating correct classification based on clinicopathological properties. In summary, our approach may facilitate the reliable diagnostic differentiation of pulmonary metastases of HNSC from primary LUSC to guide therapeutic decisions.
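The confidence-thresholding step described above (excluding low-confidence samples before reporting accuracy) can be sketched in a few lines. The threshold value and the toy probabilities below are illustrative only; they are not taken from the study.

```python
# Minimal sketch: keep only samples whose maximum class probability exceeds a
# cut-off, then report accuracy on the retained cases and the coverage.
import numpy as np

def thresholded_accuracy(probs: np.ndarray, labels: np.ndarray, threshold: float):
    """probs: (n_samples, n_classes) class probabilities; labels: (n_samples,)."""
    confidence = probs.max(axis=1)
    keep = confidence >= threshold                  # exclude low-confidence samples
    if not keep.any():
        return None, 0.0
    preds = probs[keep].argmax(axis=1)
    accuracy = (preds == labels[keep]).mean()
    coverage = keep.mean()                          # fraction of cases retained
    return accuracy, coverage

# Toy example with 3 classes (e.g. metastatic HNSC, primary LUSC, normal lung).
probs = np.array([[0.97, 0.02, 0.01],
                  [0.40, 0.35, 0.25],
                  [0.10, 0.85, 0.05],
                  [0.33, 0.33, 0.34]])
labels = np.array([0, 1, 1, 2])
print(thresholded_accuracy(probs, labels, threshold=0.8))
```

Raising the threshold trades coverage for accuracy, which is exactly the trade-off reflected in the per-method percentages reported above.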
Background: Despite modern pharmacotherapy and advanced implantable cardiac devices, the overall prognosis and quality of life of heart failure (HF) patients remain poor. This is in part due to insufficient patient stratification and a lack of individualized therapy planning, resulting in less effective treatments and a significant number of non-responders.

Methods and Results: State-of-the-art clinical phenotyping was acquired, including magnetic resonance imaging (MRI) and biomarker assessment. An individualized, multi-scale model of heart function covering cardiac anatomy, electrophysiology, biomechanics and hemodynamics was estimated using a robust framework. The model was computed on n=46 HF patients, showing for the first time that advanced multi-scale models can be fitted consistently on large cohorts. Novel multi-scale parameters derived from the model of all cases were analyzed and compared against clinical parameters, cardiac imaging, lab tests and survival scores to evaluate the explanatory power of the model and its potential for better patient stratification. Model validation was pursued by comparing clinical parameters that were not used in the fitting process against model parameters.

Conclusion: This paper illustrates how advanced multi-scale models can complement cardiovascular imaging and how they could be applied in patient care. Based on the obtained results, it becomes conceivable that, after thorough validation, such heart failure models could be applied for patient management and therapy planning in the future, as we illustrate in one patient of our cohort who received CRT-D implantation.
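The validation idea mentioned above (comparing model-derived parameters against clinical measurements that were not used for fitting) can be illustrated with a simple correlation check. The parameter name, cohort size reuse, and synthetic values below are assumptions for illustration; the actual study uses its own model outputs and clinical data.

```python
# Minimal sketch: correlate a model-derived parameter with an independently
# measured clinical value that was held out of the fitting process.
# Variable names and synthetic values are illustrative assumptions only.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
measured_ef = rng.uniform(20, 45, size=46)           # measured ejection fraction (%)
model_ef = measured_ef + rng.normal(0, 3, size=46)   # model-derived counterpart

r, p_value = pearsonr(model_ef, measured_ef)
bias = np.mean(model_ef - measured_ef)               # mean model-vs-measurement offset
print(f"Pearson r = {r:.2f} (p = {p_value:.1e}), mean bias = {bias:.2f}%")
```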