<p>Breast ultrasound images are highly valuable for the early detection of breast cancer. However, these images suffer from low resolution and speckle noise, which impair their interpretability and make them heavily dependent on radiologists’ expertise. Like most medical imaging datasets, breast ultrasound datasets are scarce and imbalanced, and annotating them is tedious and time-consuming. Transfer learning, a deep learning technique, can be used to compensate for this shortage of available images. This paper applies transfer-learned backbones within a U-Net architecture for the automatic segmentation of breast ultrasound lesions and introduces a threshold-selection mechanism to deliver optimal, well-generalized segmentations of breast tumors. The work uses the public Breast Ultrasound Images (BUSI) dataset and evaluates ten state-of-the-art candidate models as U-Net backbones, trained with five-fold cross-validation on 630 images of benign and malignant cases. Five of the ten models produced good results, and the best U-Net backbone was DenseNet121, achieving an average Dice coefficient of 0.7370 and a sensitivity of 0.7255. The model’s robustness was also evaluated on normal cases: it correctly identified 72 of 113 images, more than any of the other four top-performing models.</p>
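The abstract's evaluation metrics (Dice coefficient, sensitivity) and its threshold-selection idea can be illustrated with a minimal NumPy sketch. This is a hypothetical stand-in, not the paper's actual code: `best_threshold` simply scans candidate binarization thresholds for a predicted probability map and keeps the one maximizing Dice against the ground-truth mask.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

def sensitivity(pred, truth):
    """Sensitivity (recall) = TP / (TP + FN)."""
    tp = np.logical_and(pred, truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    return tp / (tp + fn) if (tp + fn) else 1.0

def best_threshold(prob_map, truth, thresholds=np.linspace(0.1, 0.9, 9)):
    """Pick the binarization threshold that maximizes Dice
    (a simplified stand-in for the paper's selection mechanism)."""
    scores = [dice_coefficient(prob_map > t, truth) for t in thresholds]
    return thresholds[int(np.argmax(scores))]

# Toy example: a 4x4 probability map and a matching ground-truth lesion mask.
prob = np.array([[0.9, 0.8, 0.2, 0.1],
                 [0.7, 0.6, 0.3, 0.1],
                 [0.2, 0.3, 0.1, 0.0],
                 [0.1, 0.1, 0.0, 0.0]])
truth = prob > 0.5  # here truth happens to match the 0.5-thresholded map
t = best_threshold(prob, truth)
print("best threshold:", t, "Dice:", dice_coefficient(prob > t, truth))
```

In practice the threshold would be tuned on validation folds (e.g. during the five-fold cross-validation) rather than on the evaluation mask itself.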
Computer-aided diagnosis has the potential to support, or even replace, medical personnel in everyday responsibilities such as diagnosis, therapy, and surgery. In ophthalmology, artificial intelligence approaches have been incorporated into the diagnosis of the most frequent ocular disorders, such as choroidal neovascularization (CNV), diabetic macular oedema (DME), and DRUSEN, all of which pose a significant risk of vision loss. Optical coherence tomography (OCT) is an imaging technology used to diagnose these eye disorders: it enables ophthalmologists to view the back of the eye and capture cross-sectional slices of the retina. The goal of this research is to automate the diagnosis of retinopathy, covering CNV, DME, and DRUSEN. The approach is deep-learning-based, applying transfer learning to a public dataset of OCT images with two pretrained neural network models, VGG16 and InceptionV3, both trained on the large "ImageNet" database, which enables them to extract salient features from millions of images. Furthermore, fine-tuning is applied to outperform the plain feature-extraction approach by adjusting the hyperparameters. The findings show that the VGG16 model classified better than the InceptionV3 architecture, achieving an accuracy of 0.93.
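The transfer-learning strategy described here (a frozen pretrained base plus a newly trained classification head) can be sketched in miniature with NumPy. This is a toy illustration under stated assumptions, not the paper's VGG16/InceptionV3 pipeline: a fixed random linear layer stands in for the pretrained convolutional base, and only a small softmax head is trained on synthetic "OCT" data with three classes (CNV, DME, DRUSEN).

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" base: a fixed linear layer standing in for the frozen
# VGG16/InceptionV3 convolutional base (hypothetical stand-in).
W_pretrained = rng.normal(size=(8, 4))  # maps 8-dim inputs to 4-dim features

def extract_features(x, W):
    """Forward pass through the frozen base (ReLU activation)."""
    return np.maximum(x @ W, 0.0)

def train_head(features, y, n_classes=3, lr=0.1, epochs=200):
    """Softmax-regression head trained by gradient descent;
    only these weights are updated (feature extraction)."""
    W_head = np.zeros((features.shape[1], n_classes))
    onehot = np.eye(n_classes)[y]
    for _ in range(epochs):
        logits = features @ W_head
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W_head -= lr * features.T @ (p - onehot) / len(y)
    return W_head

# Toy data: 32 synthetic samples, labels 0..2 for CNV/DME/DRUSEN.
X = rng.normal(size=(32, 8))
y = rng.integers(0, 3, size=32)

feats = extract_features(X, W_pretrained)  # base stays frozen
W_head = train_head(feats, y)              # only the head learns
preds = (feats @ W_head).argmax(axis=1)
```

Fine-tuning, the variant the abstract reports as stronger, would additionally unfreeze and update `W_pretrained` (typically with a smaller learning rate) instead of keeping it fixed.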