Objectives To develop a visual ensemble selection of deep convolutional neural networks (CNN) for 3D segmentation of breast tumors using T1-weighted dynamic contrast-enhanced (T1-DCE) MRI. Methods Multi-center 3D T1-DCE MRI scans (n = 141) were acquired for a cohort of patients diagnosed with locally advanced or aggressive breast cancer. Tumor lesions of 111 scans were equally divided between two radiologists and segmented for training. The additional 30 scans were segmented independently by both radiologists for testing. Three 3D U-Net models were trained using either post-contrast images or a combination of post-contrast and subtraction images fused at either the image or the feature level. Segmentation accuracy was evaluated quantitatively using the Dice similarity coefficient (DSC) and the 95th-percentile Hausdorff distance (HD95) and scored qualitatively by a radiologist as excellent, useful, helpful, or unacceptable. Based on this score, a visual ensemble approach selecting the best segmentation among these three models was proposed. Results The mean and standard deviation of DSC and HD95 between the two radiologists were equal to 77.8 ± 10.0% and 5.2 ± 5.9 mm. Using the visual ensemble selection, a DSC of 78.1 ± 16.2% and an HD95 of 14.1 ± 40.8 mm were reached. The qualitative assessment was excellent in 50% of cases and excellent or useful in 77%. Conclusion Using subtraction images in addition to post-contrast images provided complementary information for 3D segmentation of breast lesions by CNN. A visual ensemble selection allowing the radiologist to select the best segmentation obtained by the three 3D U-Net models achieved results comparable to inter-radiologist agreement, yielding 77% of segmented volumes considered excellent or useful. Key Points • Deep convolutional neural networks were developed using T1-weighted post-contrast and subtraction MRI to perform automated 3D segmentation of breast tumors. 
• A visual ensemble selection allowing the radiologist to choose the best segmentation among the three 3D U-Net models outperformed each of the three models. • The visual ensemble selection provided clinically useful segmentations in 77% of cases, potentially allowing for a valuable reduction of the manual 3D segmentation workload for the radiologist and greatly facilitating quantitative studies on non-invasive biomarkers in breast MRI.
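The selection step described above can be sketched as follows. This is a minimal illustration: the grade names come from the abstract, while the function, variable, and model names are hypothetical.

```python
# Hypothetical sketch of the visual ensemble selection: a radiologist grades
# each candidate segmentation, and the best-rated one is kept.
GRADES = {"excellent": 3, "useful": 2, "helpful": 1, "unacceptable": 0}

def select_best(candidates):
    """candidates: list of (model_name, grade) pairs; returns the winner."""
    return max(candidates, key=lambda c: GRADES[c[1]])

# Example: the three 3D U-Net variants graded for one scan.
candidates = [
    ("post_contrast_only", "helpful"),
    ("image_level_fusion", "excellent"),
    ("feature_level_fusion", "useful"),
]
best_model, best_grade = select_best(candidates)
```

Because each scan keeps the best of three candidates, the ensemble can only match or exceed the per-scan quality of any single model, at the cost of one quick visual review per scan.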
Monitoring atrial activity via P waves is an important feature of the arrhythmia detection procedure. The aim of this paper is to present an algorithm for P wave detection in normal and some abnormal records by improving existing methods in the field of signal processing. In contrast to classical approaches, which are completely blind to signal dynamics, our proposed method uses the extended Kalman filter, EKF25, to estimate the state variables of the equations modeling the dynamics of an ECG signal. This method is a modified version of the nonlinear dynamical model previously introduced for the generation of synthetic ECG signals and fiducial point extraction in normal ones. It is capable of estimating the separate types of activity of the heart with reasonable accuracy and performs well in the presence of morphological variations in the waveforms and ectopic beats. The MIT-BIH Arrhythmia and QT databases have been used to evaluate the performance of the proposed method. The results show that this method achieves Se = 98.38% and Pr = 96.74% over all records (considering normal and abnormal rhythms).
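The core of an extended-Kalman-filter tracker is a predict/update cycle on a state-space model. The following is a generic EKF step, not the 25-state EKF25 model from the paper (whose ECG dynamic equations are not reproduced here); the toy two-state system in the usage example is purely illustrative.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One generic EKF iteration (sketch; not the paper's EKF25 model).

    x, P : prior state estimate and covariance
    z    : new observation (e.g. an ECG sample)
    f, F : state-transition function and its Jacobian
    h, H : observation function and its Jacobian
    Q, R : process and measurement noise covariances
    """
    # Predict: propagate state and covariance through the (linearized) dynamics.
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the observed sample.
    y = z - h(x_pred)                    # innovation
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Tiny two-state illustration (constant-velocity toy system, not an ECG model):
A = np.array([[1.0, 1.0], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
x, P = np.array([0.0, 1.0]), np.eye(2)
x_new, P_new = ekf_step(
    x, P, z=np.array([1.2]),
    f=lambda s: A @ s, F=A,
    h=lambda s: C @ s, H=C,
    Q=0.01 * np.eye(2), R=np.array([[0.1]]),
)
```

In the paper's setting, the state vector would encode the modeled heart activity, and the estimated states are then used to locate P wave fiducial points.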
Skin cancer is a major public health problem, with millions of newly diagnosed cases each year. Melanoma is the deadliest form of skin cancer, responsible for over 6,500 deaths each year in the US, and its incidence has been rising rapidly over the years. Because of this, a lot of research is being done on automated image-based systems for skin lesion classification. In our paper we propose an automated melanoma and seborrheic keratosis recognition system, which is based on a pre-trained deep network combined with structural features. We compare different pre-trained deep networks, analyze the impact of using patient data in our approach, and evaluate our system performance on different datasets. Our results show that including patient data changes the characteristic curve metric value by around 2-6%, and the choice of algorithm in the final classification layer changes it by around 1-4%.
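The combination of pre-trained deep features, hand-crafted structural features, and patient data can be sketched as simple feature concatenation ahead of a final classifier. All names, dimensions, and the stub feature vectors below are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

# Hypothetical sketch: fuse deep features from a pre-trained network with
# hand-crafted structural features (and optional patient metadata) before
# the final classification layer. Real feature extractors are stubbed out.

def combine_features(deep_feats, structural_feats, patient_data=None):
    parts = [deep_feats, structural_feats]
    if patient_data is not None:   # e.g. age and sex encoded numerically
        parts.append(patient_data)
    return np.concatenate(parts)

deep = np.zeros(2048)         # stand-in for penultimate-layer activations
structural = np.ones(32)      # stand-in for lesion shape/border descriptors
meta = np.array([55.0, 1.0])  # illustrative patient metadata

fused = combine_features(deep, structural, meta)
```

The fused vector would then feed whichever final classifier is chosen, which is exactly the swap the abstract reports as changing performance by around 1-4%.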
In clinical practice, multi-sequence MRI protocols for brain tumor segmentation are not standardized, and therefore a flexible segmentation approach is needed which makes optimal use of all available MRI data. In this study, we present and evaluate an early and a late fusion Convolutional Neural Network (CNN) based on the DeepMedic architecture to segment brain tumors using different combinations of multi-sequence MRI datasets. For the early fusion approach, we trained a dedicated CNN for each possible combination of MRI sequences; the late fusion approach is more generic: we trained an independent CNN for each type of MRI sequence and merged the feature maps using a fully connected layer to generate the final segmentation. The segmentation performance of the late fusion approach was very similar to that of the early fusion CNN, while it provides more flexibility in terms of combining all available MRI data.
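The distinction between the two fusion strategies can be sketched as follows. This is a toy illustration of the data flow only: the per-sequence CNN is replaced by a trivial stub, and all names and shapes are assumptions rather than the DeepMedic implementation.

```python
import numpy as np

def extract(x):
    # Stand-in for a CNN feature extractor (here: a trivial channel average).
    return x.mean(axis=0, keepdims=True)

def early_fusion(sequences):
    # Stack all sequences as input channels of a single network.
    stacked = np.stack(sequences, axis=0)
    return extract(stacked.reshape(len(sequences), -1))

def late_fusion(sequences):
    # One independent extractor per sequence; merge the feature maps afterwards
    # (the paper merges with a fully connected layer; concatenation stands in).
    feats = [extract(s.reshape(1, -1)) for s in sequences]
    return np.concatenate(feats, axis=1)

t1 = np.random.rand(4, 4)     # illustrative T1 slice
flair = np.random.rand(4, 4)  # illustrative FLAIR slice
e = early_fusion([t1, flair])
l = late_fusion([t1, flair])
```

The flexibility argument follows from the structure: early fusion fixes the input channel count at training time, so each sequence combination needs its own network, whereas late fusion can drop or add per-sequence branches and only the merging layer sees the difference.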