In the application of deep learning to optical coherence tomography (OCT) data, it is common to train classification networks using 2D images originating from volumetric data. Given the micrometer resolution of OCT systems, consecutive images are often very similar in both visible structures and noise. Thus, an inappropriate data split can result in overlap between the training and testing sets, an aspect that a large portion of the literature overlooks. In this study, the effect of improper dataset splitting on model evaluation is demonstrated for three classification tasks using three extensively used open-access OCT datasets: Kermany’s and Srinivasan’s ophthalmology datasets and the AIIMS breast tissue dataset. Results show that classification performance is inflated by 0.07 to 0.43 in terms of Matthews correlation coefficient (accuracy: 5% to 30%) for models tested on improperly split datasets, highlighting the considerable effect of dataset handling on model evaluation. This study intends to raise awareness of the importance of dataset splitting, given the increased research interest in applying deep learning to OCT data.
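The subject-wise split advocated above can be sketched as follows; this is a minimal illustration, and the function and variable names are assumptions rather than code from the study:

```python
import random

def subject_wise_split(subject_ids, test_fraction=0.2, seed=0):
    """Split slice indices so that all slices from one subject/volume
    land in the same set, preventing near-duplicate consecutive OCT
    slices from leaking between training and testing."""
    subjects = sorted(set(subject_ids))
    rng = random.Random(seed)
    rng.shuffle(subjects)
    n_test = max(1, round(test_fraction * len(subjects)))
    test_subjects = set(subjects[:n_test])
    train_idx = [i for i, s in enumerate(subject_ids) if s not in test_subjects]
    test_idx = [i for i, s in enumerate(subject_ids) if s in test_subjects]
    return train_idx, test_idx
```

An image-wise (random) split over the same slices would place neighboring, nearly identical frames on both sides of the split, which is the source of the inflated scores reported above.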
Effective, robust, and automatic tools for brain tumor segmentation are needed to extract information useful in treatment planning. Recently, convolutional neural networks have shown remarkable performance in the identification of tumor regions in magnetic resonance (MR) images. Context-aware artificial intelligence is an emerging concept for the development of deep learning applications for computer-aided medical image analysis. A large portion of the current research is devoted to the development of new network architectures that improve segmentation accuracy by using context-aware mechanisms. In this work, it is investigated whether the addition of contextual information from the brain anatomy, in the form of white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) masks and probability maps, improves U-Net-based brain tumor segmentation. The BraTS2020 dataset was used to train and test two standard 3D U-Net (nnU-Net) models that, in addition to the conventional MR image modalities, used the anatomical contextual information as extra channels in the form of binary masks (CIM) or probability maps (CIP). For comparison, a baseline model (BLM) that used only the conventional MR image modalities was also trained. The impact of adding contextual information was investigated in terms of overall segmentation accuracy, model training time, domain generalization, and compensation for fewer MR modalities available for each subject. Median (mean) Dice scores of 90.2 (81.9), 90.2 (81.9), and 90.0 (82.1) were obtained on the official BraTS2020 validation dataset (125 subjects) for BLM, CIM, and CIP, respectively. Results show no statistically significant difference in Dice scores between the baseline model and the contextual information models (p > 0.05), even when comparing performances for high- and low-grade tumors independently. In the few low-grade cases where improvement was seen, the number of false positives was reduced.
Moreover, no improvements were found when considering model training time or domain generalization. Only in the case of compensation for fewer MR modalities available for each subject did the addition of anatomical contextual information significantly improve (p < 0.05) the segmentation of the whole tumor. In conclusion, there is no overall significant improvement in segmentation performance when using anatomical contextual information in the form of either binary WM, GM, and CSF masks or probability maps as extra channels.
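The channel-stacking idea behind the CIM and CIP models can be sketched as below; shapes and names are illustrative assumptions, since the study itself relies on nnU-Net's own data pipeline:

```python
import numpy as np

def stack_input_channels(mr_modalities, tissue_maps, binary=True):
    """Append anatomical context as extra input channels.

    mr_modalities: list of four 3D arrays (e.g. T1w, T1wGd, T2w, FLAIR).
    tissue_maps:   list of three 3D arrays (WM, GM, CSF) with values in [0, 1].
    binary=True thresholds the maps into masks (CIM-style); binary=False
    keeps the probability maps as-is (CIP-style).
    """
    context = [(m > 0.5).astype(np.float32) if binary else m.astype(np.float32)
               for m in tissue_maps]
    # Channel-first volume: (4 + 3, D, H, W); a BLM-style input would
    # simply omit the context channels, giving (4, D, H, W).
    return np.stack([m.astype(np.float32) for m in mr_modalities] + context,
                    axis=0)
```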
To investigate the potential of optical coherence tomography (OCT) to distinguish between normal and pathologic thyroid tissue, 3D OCT images were acquired from ex vivo thyroid samples of adult subjects (n=22) diagnosed with a variety of pathologies. The follicular structure was analyzed in terms of count, size, density, and sphericity. Results showed that the OCT images agreed well with the corresponding histopathology and that the calculated parameters were representative of the variation in follicular structure. The analysis of OCT volumes provides quantitative information that could make automatic classification possible. Thus, OCT could be beneficial for intraoperative surgical guidance or in the routine pathology assessment.
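Of the follicle parameters listed, sphericity is the least self-explanatory; the standard definition, ψ = π^(1/3)·(6V)^(2/3)/A, can be computed as below. This is a generic sketch of the measure, not the study's segmentation pipeline:

```python
import math

def sphericity(volume, surface_area):
    """Sphericity of a 3D shape: 1.0 for a perfect sphere,
    strictly less than 1.0 for any other shape."""
    return math.pi ** (1 / 3) * (6 * volume) ** (2 / 3) / surface_area
```

For example, a perfect sphere of any radius yields 1.0, while flattened or irregular follicles score lower.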
The infiltrative nature of malignant gliomas results in active tumor spreading into the peritumoral edema, which is not visible in conventional magnetic resonance imaging (cMRI) even after contrast injection. MR relaxometry (qMRI) measures relaxation rates that depend on tissue properties and can offer additional contrast mechanisms to highlight the non-enhancing infiltrative tumor. The aim of this study is to investigate whether qMRI data provide additional information compared to cMRI sequences (T1w, T1wGd, T2w, FLAIR) for deep learning-based brain tumor (1) detection and (2) segmentation. A total of 23 patients with histologically confirmed malignant glioma were retrospectively included in the study. Quantitative MR imaging was used to obtain R1 (1/T1), R2 (1/T2), and proton density maps pre- and post-gadolinium contrast injection. Conventional MR imaging was also performed. A 2D CNN detection model and a 2D U-Net were trained on transversal slices (n=528), using either cMRI data or a combination of qMRI pre- and post-contrast data, for tumor detection and segmentation, respectively. Moreover, trends in the quantitative R1 and R2 rates of regions identified as relevant for tumor detection by model explainability methods were qualitatively analyzed. Tumor detection and segmentation performance was highest for models trained with the combination of qMRI pre- and post-contrast data (detection MCC=0.72, segmentation Dice=0.90); however, the improvements were not statistically significant compared to cMRI (detection MCC=0.67, segmentation Dice=0.90). The analysis of the relaxation rates of the relevant regions identified using model explainability methods showed no differences between models trained on cMRI or qMRI. In the majority of individual cases, relevant regions falling outside the annotation showed changes in relaxation rates after contrast injection similar to those within the annotation.
A similar trend could not be seen when looking at relaxation trends over the dataset as a whole. In conclusion, models trained on qMRI data obtain performance similar to those trained on cMRI data, with the advantage of quantitatively measuring brain tissue properties within a comparable scan time (11.8 minutes for qMRI with and without contrast, and 12.2 minutes for cMRI). Moreover, when considering individual patients, regions identified by model explainability methods as relevant for tumor detection outside the manual tumor annotation showed changes in quantitative relaxation rates after contrast injection similar to those of regions within the annotation, suggestive of infiltrative tumor in the peritumoral edema.
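The Dice score used throughout the segmentation comparisons above is the standard overlap measure 2|A ∩ B| / (|A| + |B|); a minimal sketch, with boolean masks as an assumed input format:

```python
import numpy as np

def dice_score(pred, target):
    """Dice overlap between two binary masks: 1.0 is perfect overlap,
    0.0 is no overlap at all."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, target).sum() / denom
```

Note that Dice rewards overlap only; the reduction in false positives mentioned for some low-grade cases shows up as a higher score because spurious predicted voxels inflate the denominator.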
Intraoperative guidance tools for thyroid surgery based on optical coherence tomography (OCT) could aid in distinguishing between normal and diseased tissue. However, OCT images are difficult to interpret, so real-time automatic analysis could support clinical decision-making. In this study, several deep learning models were investigated for thyroid disease classification on 2D and 3D OCT data obtained from ex vivo specimens of 22 patients undergoing surgery and diagnosed with several thyroid pathologies. Additionally, two open-access datasets were used to evaluate the custom models. On the thyroid dataset, the best performance was achieved by the 3D vision transformer model, with a Matthews correlation coefficient (MCC) of 0.79 (accuracy = 0.90) for the normal-versus-abnormal classification. On the open-access datasets, the custom models achieved the best performance (MCC > 0.88, accuracy > 0.96). The results obtained for the normal-versus-abnormal classification suggest OCT, complemented with deep learning-based analysis, as a tool for real-time automatic identification of diseased tissue in thyroid surgery.
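The MCC reported above is preferred over plain accuracy because it accounts for all four confusion-matrix cells and stays near 0 for chance-level predictions on imbalanced data. A minimal sketch from confusion-matrix counts:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts.
    Ranges from -1 (total disagreement) through 0 (chance) to 1 (perfect)."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0  # degenerate case: a constant predictor or empty class
    return (tp * tn - fp * fn) / denom
```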