In this paper, we present our system for the RSNA Intracranial Hemorrhage Detection challenge, which is based on the RSNA 2019 Brain CT Hemorrhage dataset. The proposed system is based on a lightweight deep neural network architecture composed of a convolutional neural network (CNN) that takes as input individual CT slices, and a Long Short-Term Memory (LSTM) network that takes as input multiple feature embeddings provided by the CNN. For efficient processing, we consider various feature selection methods to produce a subset of useful CNN features for the LSTM. Furthermore, we downscale the CT slices by a factor of 2×, which enables us to train the model faster. Even though our model is designed to balance speed and accuracy, we report a weighted mean log loss of 0.04989 on the final test set, which places us in the top 30 (top 2%) among a total of 1345 participants. Although our computing infrastructure did not allow it, processing CT slices at their original scale would likely improve performance. In order to enable others to reproduce our results, we provide our code as open source. After the challenge, we conducted a subjective intracranial hemorrhage detection assessment with radiologists, which indicated that the performance of our deep model is on par with that of doctors specialized in reading CT scans. Another contribution of our work is the integration of Grad-CAM visualizations into our system, providing useful explanations for its predictions. We therefore consider our system a viable option when a fast diagnosis or a second opinion on intracranial hemorrhage detection is needed.
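The feature selection step described above can be illustrated with a minimal sketch. The abstract does not specify which selection criteria were compared, so the variance-based criterion, the function name, and all dimensions below are illustrative assumptions rather than the authors' exact method:

```python
import numpy as np

def select_features(embeddings, k):
    """Keep the k CNN feature dimensions with the highest variance
    across slices.

    embeddings: array of shape (num_slices, num_features), one CNN
    embedding per CT slice. The reduced embeddings would then be fed,
    slice by slice, into the LSTM.
    """
    variances = embeddings.var(axis=0)
    keep = np.sort(np.argsort(variances)[-k:])  # indices of the k most variable features
    return embeddings[:, keep], keep

# Toy example: 8 slices with 16-dim embeddings, reduced to 4 features.
rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))
reduced, idx = select_features(emb, 4)
print(reduced.shape)  # (8, 4)
```

A filter criterion like this is cheap to compute and keeps the per-slice sequence structure intact for the downstream LSTM, which matches the stated goal of balancing speed and accuracy.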
Computed Tomography (CT) scanners that are commonly used in hospitals and medical centers nowadays produce low-resolution images, e.g. one voxel in the image corresponds to at most one cubic millimeter of tissue. In order to accurately segment tumors and make treatment plans, radiologists and oncologists need CT scans of higher resolution. The same problem appears in Magnetic Resonance Imaging (MRI). In this paper, we propose an approach for the single-image super-resolution of 3D CT or MRI scans. Our method is based on deep convolutional neural networks (CNNs) composed of 10 convolutional layers and an intermediate upscaling layer that is placed after the first 6 convolutional layers. Our first CNN, which increases the resolution on two axes (width and height), is followed by a second CNN, which increases the resolution on the third axis (depth). Unlike other methods, we compute the loss with respect to the ground-truth high-resolution image right after the upscaling layer, in addition to computing the loss after the last convolutional layer. The intermediate loss forces our network to produce a better output, closer to the ground truth. A widely used approach to obtain sharp results is to add Gaussian blur using a fixed standard deviation. In order to avoid overfitting to a fixed standard deviation, we apply Gaussian smoothing with various standard deviations, unlike other approaches. We evaluate the proposed method in the context of 2D and 3D super-resolution of CT and MRI scans from two databases, comparing it to related works from the literature and to baselines based on various interpolation schemes, using 2× and 4× scaling factors. The empirical study shows that our approach attains superior results compared to all other methods.
Moreover, our subjective image quality assessment by human observers reveals that both doctors and regular annotators chose our method in favor of Lanczos interpolation in 97.55% of cases for an upscaling factor of 2× and in 96.69% of cases for an upscaling factor of 4×. In order to allow others to reproduce our state-of-the-art results, we provide our code as open source at https://github.com/lilygeorgescu/3d-super-res-cnn. INDEX TERMS: Convolutional neural networks, single-image super-resolution, CT images, MRI images, medical image super-resolution.
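The varied-sigma Gaussian smoothing used to generate low-resolution training samples can be sketched as follows. The abstract does not give the sigma range or the downsampling details, so the sampling interval, the 2× decimation, and the helper names below are illustrative assumptions:

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Normalized 1D Gaussian kernel of length 2*radius + 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def blur_and_downsample(image, sigma):
    """Blur a 2D image with a separable Gaussian, then decimate 2x."""
    radius = int(3 * sigma) + 1
    k = gaussian_kernel1d(sigma, radius)
    # Separable convolution: filter rows, then columns.
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    return blurred[::2, ::2]

# Sample a different sigma per training image instead of fixing one,
# so the network does not overfit to a single degradation model.
rng = np.random.default_rng(0)
hr = rng.random((64, 64))               # stand-in for a high-resolution slice
sigma = rng.uniform(0.5, 2.0)           # illustrative range, not from the paper
lr = blur_and_downsample(hr, sigma)
print(lr.shape)  # (32, 32)
```

Randomizing the blur in this way exposes the network to a family of degradations rather than a single fixed one, which is the overfitting-avoidance argument made in the abstract.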
It is already known that electrostatic, magnetostatic, extremely low-frequency electric fields, and pulsed electric fields can be utilized in cancer treatment. The healing effect depends on the frequency and amplitude of the electric field. In the present work, a simple theoretical model is developed to estimate the intensity of the electrostatic field that damages a living cell during division. Using this model, it is shown that the magnification of the electric field in the bottleneck of a dividing cell is sufficient to break chemical bonds between molecules through an avalanche process. Our model shows that an externally applied electric field of 4 V/cm intensity is able to damage a cancer cell at the dividing stage.