Recently, deep learning has achieved various successes in medical image analysis, including computer-aided diagnosis (CADx). However, current deep learning-based CADx approaches offer limited interpretability of their diagnostic decisions, and this limited interpretability is a major obstacle to their practical use. In this paper, a novel visually interpretable deep network framework is proposed to provide diagnostic decisions together with visual interpretation. The proposed method is motivated by the fact that radiologists characterize breast masses according to the breast imaging reporting and data system (BIRADS). The proposed framework consists of a BIRADS guided diagnosis network and a BIRADS critic network. A 2D map, named the BIRADS guide map, is generated during the inference process of the deep network. The visual features extracted from the breast masses are refined by the BIRADS guide map, which helps the deep network focus on more informative areas. The BIRADS critic network encourages the BIRADS guide map to be relevant to the characterization of masses in terms of BIRADS descriptions. To verify the proposed method, comparative experiments were conducted on a public mammogram database. On the independent test set (170 malignant masses and 170 benign masses), the proposed method achieved significantly higher performance than the deep network approach without the BIRADS guide map (p < 0.05). Moreover, visualization was conducted to show where the deep network exploited more information. This study demonstrates that the proposed framework could be a promising approach for visually interpreting the diagnostic decisions of a deep network.
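The refinement step described above can be sketched as attention-style gating: a 2D guide map is normalized and multiplied element-wise across the feature channels, so that regions the map marks as informative are emphasized. This is a minimal illustrative sketch; the function name, shapes, and normalization are assumptions, not the paper's exact implementation.

```python
import numpy as np

def refine_features(features, guide_map):
    """Refine visual features with a 2D guide map (attention-style gating).

    features: (C, H, W) feature maps extracted from a mass image.
    guide_map: (H, W) guide map; higher values mark more informative
    regions. Shapes and names here are illustrative assumptions.
    """
    # Normalize the guide map to [0, 1] so it acts as soft attention.
    g = guide_map - guide_map.min()
    rng = g.max()
    if rng > 0:
        g = g / rng
    # Broadcast the 2D map across all channels and gate the features.
    return features * g[np.newaxis, :, :]

# Toy usage: a central "informative" region passes through, the rest is suppressed.
feats = np.random.rand(8, 16, 16)
guide = np.zeros((16, 16))
guide[4:12, 4:12] = 1.0
refined = refine_features(feats, guide)
```

In a real network the guide map would be produced by a learned branch and the gating applied inside the forward pass, but the element-wise product is the core operation.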
The ambiguity of the decision-making process has been pointed out as the main obstacle to applying deep learning-based methods in practice, despite their outstanding performance. Interpretability builds confidence in a deep learning system and is therefore particularly important in the medical field. In this study, a novel deep network is proposed that explains its diagnostic decision with a visual pointing map and a diagnostic sentence justifying the result simultaneously. To increase the accuracy of sentence generation, a visual word constraint model is devised for training the justification generator. To verify the proposed method, comparative experiments were conducted on the diagnosis of breast masses. Experimental results demonstrate that the proposed deep network can explain its diagnoses more accurately with various textual justifications.
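One simple way to realize a visual word constraint during training is to upweight the loss on descriptor words (e.g. BIRADS terms such as "spiculated" or "round") in the target sentence, so the generator is penalized more for missing them. The sketch below is a toy stand-in under that assumption; the paper's actual constraint model is not specified here, and all names are illustrative.

```python
import numpy as np

def constrained_nll(probs, target_ids, visual_word_ids, weight=2.0):
    """Toy visual-word-constrained loss for a justification generator.

    probs: (T, V) per-step word probabilities from the generator.
    target_ids: (T,) ground-truth word indices.
    visual_word_ids: vocabulary indices of visual descriptor words;
    upweighting them is an illustrative stand-in for the constraint.
    """
    # Negative log-likelihood of each target word.
    losses = -np.log(probs[np.arange(len(target_ids)), target_ids] + 1e-12)
    # Visual descriptor words contribute more to the loss.
    weights = np.array([weight if t in visual_word_ids else 1.0
                        for t in target_ids])
    return float(np.mean(weights * losses))

# Toy usage: uniform predictions over a 4-word vocabulary, 2 time steps,
# where word index 1 is a visual descriptor word.
probs = np.full((2, 4), 0.25)
loss = constrained_nll(probs, np.array([0, 1]), {1})
```

The same idea could be expressed as an auxiliary loss term rather than per-word weighting; either way, the constraint biases training toward sentences that contain the visually grounded vocabulary.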