Evgeny Vasiliev and Vadim Turlapov

Abstract. MRI analysis occupies a central position in brain tumor diagnosis and treatment, so its precise evaluation is crucially important. However, its 3D nature poses several challenges, and the analysis is therefore often performed on 2D projections, which reduces complexity but increases bias. On the other hand, time-consuming 3D evaluation, such as segmentation, can provide precise estimates of a number of valuable spatial characteristics, giving an understanding of the course of the disease. Recent studies focusing on the segmentation task report superior performance of deep learning methods compared to classical computer vision algorithms, yet it remains a challenging problem. In this paper we present a deep cascaded approach for automatic brain tumor segmentation. As in recent methods for object detection, our implementation is based on neural networks; we propose modifications to the 3D U-Net architecture and the augmentation strategy to efficiently handle multimodal MRI input, and we additionally introduce an approach to enhancing segmentation quality with context obtained from models of the same topology operating on downscaled data. We evaluate the presented approach on the BraTS 2018 dataset and discuss the results.
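The core idea of this abstract, a model on downscaled data supplying context to a full-resolution model of the same topology, can be illustrated with a minimal sketch. The sketch below assumes PyTorch; the class names (TinyUNet3D, CascadedSegmenter), the two-level network depth, and the 0.5 downscaling factor are illustrative assumptions, not the authors' actual architecture.

```python
# Minimal sketch of a coarse-to-fine cascade, assuming PyTorch.
# TinyUNet3D and CascadedSegmenter are hypothetical names; the real
# paper's 3D U-Net modifications are not reproduced here.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    """Two 3x3x3 convolutions with instance norm and ReLU."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1),
        nn.InstanceNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, 3, padding=1),
        nn.InstanceNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )

class TinyUNet3D(nn.Module):
    """A two-level 3D U-Net: one downsampling step, one skip connection."""
    def __init__(self, in_ch, n_classes, base=16):
        super().__init__()
        self.enc = conv_block(in_ch, base)
        self.down = nn.MaxPool3d(2)
        self.bottom = conv_block(base, base * 2)
        self.up = nn.ConvTranspose3d(base * 2, base, 2, stride=2)
        self.dec = conv_block(base * 2, base)
        self.head = nn.Conv3d(base, n_classes, 1)

    def forward(self, x):
        e = self.enc(x)
        b = self.bottom(self.down(e))
        d = self.dec(torch.cat([self.up(b), e], dim=1))
        return self.head(d)

class CascadedSegmenter(nn.Module):
    """The coarse model runs on a downscaled volume; its upsampled class
    probabilities are concatenated as extra input channels to the fine model."""
    def __init__(self, in_ch=4, n_classes=4, scale=0.5):
        super().__init__()
        self.scale = scale
        self.coarse = TinyUNet3D(in_ch, n_classes)
        self.fine = TinyUNet3D(in_ch + n_classes, n_classes)

    def forward(self, x):
        small = F.interpolate(x, scale_factor=self.scale,
                              mode="trilinear", align_corners=False)
        coarse_logits = self.coarse(small)
        context = F.interpolate(coarse_logits.softmax(dim=1),
                                size=x.shape[2:], mode="trilinear",
                                align_corners=False)
        return self.fine(torch.cat([x, context], dim=1))

if __name__ == "__main__":
    # Four MRI modalities (e.g. T1, T1ce, T2, FLAIR) stacked as channels.
    model = CascadedSegmenter()
    volume = torch.randn(1, 4, 64, 64, 64)
    print(model(volume).shape)  # torch.Size([1, 4, 64, 64, 64])
```

The design choice illustrated here is that the coarse model contributes soft class probabilities rather than hard labels, so the fine model can weigh the low-resolution context against full-resolution detail.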
This article is devoted to searching for high-level explainable features that can remain explainable for a wide class of objects or phenomena and become an integral part of explainable AI (XAI). The study involved a 25-day experiment on early diagnosis of wheat stress, using drought stress as an example. The state of the plants was periodically monitored with thermal infrared (TIR) and hyperspectral image (HSI) cameras. A single-layer perceptron (SLP)-based classifier was used as the main instrument in the XAI study. To make the SLP input explainable, the direct HSI was replaced by images of six popular vegetation indices and three HSI channels (R630, G550, and B480), together referred to as indices, along with the TIR image. Furthermore, in the explainability analysis, each of the 10 images was replaced by its six statistical features: min, max, mean, std, max–min, and entropy. For SLP output explainability, seven output neurons corresponding to the key states of the plants were chosen. The inner layer of the SLP was constructed from 15 neurons: 10 corresponding to the indices and 5 reserved neurons. The classification capabilities of all 60 features and 10 indices of the SLP classifier were studied. The main result: entropy is the earliest high-level stress feature for all indices; entropy and an entropy-like feature (max–min), paired with one of the other statistical features, can provide 100% (or near-100%) accuracy for most indices, serving as an integral part of XAI.
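The feature pipeline described above (six statistics per index image, ten images, hence 60 inputs, a 15-neuron inner layer, and 7 output states) can be illustrated with a short sketch. It assumes NumPy and PyTorch; the 32-bin histogram behind the entropy estimate, the sigmoid activation, and the function names are assumptions, not the authors' code.

```python
# Minimal sketch of the 60-feature extraction and the SLP shape described
# in the abstract, assuming NumPy/PyTorch. Bin count and activation are
# illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn

def six_features(img, bins=32):
    """min, max, mean, std, max-min, and Shannon entropy of one index image."""
    flat = np.asarray(img, dtype=np.float64).ravel()
    hist, _ = np.histogram(flat, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before taking the log
    entropy = -np.sum(p * np.log2(p))
    return np.array([flat.min(), flat.max(), flat.mean(), flat.std(),
                     flat.max() - flat.min(), entropy])

def feature_vector(index_images):
    """Stack 6 statistics for each of the 10 index/TIR images -> 60 features."""
    return np.concatenate([six_features(img) for img in index_images])

# 60 inputs -> 15 inner neurons (10 per index + 5 reserved) -> 7 plant states.
slp = nn.Sequential(
    nn.Linear(60, 15),
    nn.Sigmoid(),
    nn.Linear(15, 7),
)

if __name__ == "__main__":
    images = [np.random.rand(64, 64) for _ in range(10)]
    x = feature_vector(images)                        # shape (60,)
    logits = slp(torch.tensor(x, dtype=torch.float32).unsqueeze(0))
    print(logits.shape)  # torch.Size([1, 7])
```

Replacing each image with a handful of named statistics is what keeps the classifier explainable: every input weight can be traced back to a specific statistic of a specific vegetation index.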