Accurate segmentation and quantification of the superficial foveal avascular zone (sFAZ) is important for the diagnosis and treatment of many retinal diseases, such as diabetic retinopathy and retinal vein occlusion. We proposed a deep-learning-based method for automatic segmentation and quantification of the sFAZ in optical coherence tomography angiography (OCTA) images that is robust to brightness and contrast (B/C) variations. A dataset of 405 OCTA images from 45 participants was acquired with a Zeiss Cirrus HD-OCT 5000, and the ground truth (GT) was subsequently segmented manually. A deep learning network with an encoder-decoder architecture was created to classify each pixel as sFAZ or non-sFAZ. We then applied largest-connected-region extraction and hole filling to refine the automatic segmentation results. A maximum mean Dice similarity coefficient (DSC) of 0.976 ± 0.011 was obtained when the automatic segmentation results were compared against the GT, and the correlation coefficient between the sFAZ area computed from the automatic segmentation and that computed from the GT was 0.997. Across all nine brightness/contrast parameter groups, the DSC of the proposed method exceeded 0.96, and the method outperformed two previously reported methods in sFAZ segmentation and quantification. In conclusion, we proposed and verified an automatic, deep-learning-based sFAZ segmentation and quantification method that is robust to B/C variations, an important step toward automated segmentation and quantification suitable for clinical analysis.
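The post-processing and evaluation steps described above (largest-connected-region extraction, hole filling, and the Dice similarity coefficient) can be sketched as follows. This is an illustrative re-implementation using `scipy.ndimage`, not the authors' code; the toy masks are invented for demonstration.

```python
import numpy as np
from scipy import ndimage

def postprocess(mask):
    """Keep only the largest connected region, then fill interior holes
    (a generic version of the fine-tuning step described in the abstract)."""
    labeled, n = ndimage.label(mask)
    if n == 0:
        return mask
    # size of each labeled component (labels run 1..n)
    sizes = ndimage.sum(mask, labeled, range(1, n + 1))
    largest = labeled == (np.argmax(sizes) + 1)
    return ndimage.binary_fill_holes(largest)

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

# Toy prediction: a 5x5 blob with one interior hole plus a spurious speckle.
pred = np.zeros((10, 10), dtype=bool)
pred[2:7, 2:7] = True
pred[4, 4] = False   # hole to be filled
pred[8, 8] = True    # small false-positive region to be removed
clean = postprocess(pred)

gt = np.zeros((10, 10), dtype=bool)
gt[2:7, 2:7] = True
print(round(dice(clean, gt), 3))  # → 1.0 (post-processing recovers the GT blob)
```

Removing the speckle and filling the hole turns an imperfect per-pixel classification into a single coherent region, which is why this step raises the DSC on real network outputs as well.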
Background: Microvascular invasion (MVI) has a significant effect on the prognosis of hepatocellular carcinoma (HCC), but its preoperative identification is challenging. Radiomics features extracted from medical images, such as magnetic resonance (MR) images, can be used to predict MVI. In this study, we explored the effects of different imaging sequences, feature extraction and selection methods, and classifiers on the performance of HCC MVI predictive models.

Methods: After screening against the inclusion criteria, 69 patients with HCC and preoperative gadoxetic acid-enhanced MR images were enrolled. In total, 167 features were extracted from the MR images of each sequence for each patient. Experiments were designed to investigate the effects of imaging sequence, number of gray levels (Ng), quantization algorithm, feature selection method, and classifier on the performance of radiomics biomarkers in predicting HCC MVI. The models were trained and tested using leave-one-out cross-validation (LOOCV).

Results: The radiomics model based on hepatobiliary-phase (HBP) images had better predictive performance than those based on arterial-phase (AP), portal-venous-phase (PVP), and pre-enhanced T1-weighted images [area under the receiver operating characteristic (ROC) curve (AUC) = 0.792 vs. 0.641/0.634/0.620, P = 0.041/0.021/0.010, respectively]. Compared with the equal-probability and Lloyd-Max algorithms, the radiomics features obtained using the uniform quantization algorithm performed better (AUC = 0.643/0.666 vs. 0.792, P = 0.002/0.003, respectively). Among the values of 8, 16, 32, 64, and 128, the best predictive performance was achieved when Ng was 64 (AUC =0.792
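The abstract compares quantization algorithms for mapping MR intensities into Ng gray levels before texture-feature extraction. A minimal sketch of uniform (equal-width-bin) quantization is shown below; this is an assumed generic implementation, not the study's code, and the toy image is invented for illustration.

```python
import numpy as np

def uniform_quantize(img, ng=64):
    """Map intensities into `ng` equal-width bins labeled 1..ng
    (uniform quantization, as commonly used in radiomics pipelines)."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                      # constant image: single gray level
        return np.ones(img.shape, dtype=np.int32)
    q = np.floor((img - lo) / (hi - lo) * ng).astype(np.int32) + 1
    return np.clip(q, 1, ng)          # the max intensity lands in the top bin

# Toy 2x2 "image" spanning the 8-bit range, quantized to Ng = 8.
img = np.array([[0, 10], [128, 255]], dtype=np.uint8)
print(uniform_quantize(img, ng=8))    # → [[1 1] [5 8]]
```

Lloyd-Max and equal-probability quantization instead place bin edges according to the intensity distribution; the choice changes the gray-level co-occurrence statistics and hence, as the abstract reports, the downstream predictive performance.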