Deep networks require a considerable amount of training data; otherwise, they generalize poorly. Data augmentation techniques help a network generalize better by providing more variety in the training data. Standard augmentation techniques, such as flipping and scaling, produce new data that are modified versions of the original data. Generative Adversarial Networks (GANs) have been designed to generate genuinely new data that can be exploited for this purpose. In this paper, we propose a new GAN model, named StynMedGAN, for synthetically generating medical images to improve the performance of classification models. StynMedGAN builds upon the state-of-the-art StyleGANv2, which has produced remarkable results generating a wide range of natural images. We introduce a regularization term, a normalized loss factor, into the existing discriminator loss of StyleGANv2; it forces the generator to produce normalized images and penalizes it if it fails to do so. Medical imaging modalities, such as X-rays, CT scans, and MRIs, differ in nature; we show that the proposed GAN extends the capacity of StyleGANv2 to handle medical images more effectively. StynMedGAN is applied to three types of medical imaging (X-rays, CT scans, and MRI) to produce additional data for classification tasks. To validate its effectiveness, three classifiers (a CNN, DenseNet121, and VGG-16) are used. Results show that classifiers trained with StynMedGAN-augmented data outperform the same classifiers trained only on the original data. The proposed model achieved 100%, 99.6%, and 100% accuracy for chest X-ray, chest CT-scan, and brain MRI classification, respectively. These promising results point to a potentially important resource that practitioners and radiologists can use to diagnose different diseases.
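The abstract describes adding a normalization-based regularization term to the StyleGANv2 discriminator loss, penalizing the generator when its outputs are not normalized. The paper's exact formulation is not given here; the NumPy sketch below illustrates one plausible form under that assumption: a hypothetical `normalization_penalty` that penalizes generated batches whose per-image pixel statistics drift from zero mean and unit variance, added to the non-saturating softplus adversarial loss used by StyleGANv2.

```python
import numpy as np

def normalization_penalty(fake_images, weight=1.0):
    """Hypothetical regularizer (illustrative assumption, not the paper's
    exact term): penalize generated images whose per-image pixel statistics
    deviate from zero mean and unit standard deviation."""
    mean_dev = np.mean(fake_images, axis=(1, 2, 3)) ** 2          # squared mean per image
    std_dev = (np.std(fake_images, axis=(1, 2, 3)) - 1.0) ** 2    # squared std deviation from 1
    return weight * np.mean(mean_dev + std_dev)

def discriminator_loss(d_real, d_fake, fake_images, reg_weight=0.1):
    """Non-saturating GAN loss in softplus form (as used by StyleGANv2),
    plus the assumed normalization penalty on the generated batch."""
    adv = np.mean(np.logaddexp(0.0, -d_real)) + np.mean(np.logaddexp(0.0, d_fake))
    return adv + normalization_penalty(fake_images, reg_weight)
```

A batch of per-image standardized outputs incurs (near-)zero penalty, while an unnormalized batch is penalized in proportion to how far its statistics drift; `reg_weight` would be tuned like any loss-balancing coefficient.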
BACKGROUND: Medical image processing has gained much attention in the development of computer-aided diagnosis (CAD) systems for diseases. CAD systems require a deep understanding of X-rays, MRIs, CT scans, and other medical images, and segmenting the region of interest (ROI) from these images is one of the most crucial tasks. OBJECTIVE: Although the active contour model (ACM) is a popular method for segmenting ROIs in medical images, the final segmentation result depends strongly on the initial placement of the contour. To overcome this challenge, the objective of this study is to investigate the feasibility of a fully automated initialization process that can be used in the ACM to segment ROIs more effectively. METHODS: A fully automated initialization algorithm, the adaptive Otsu-based initialization (AOI) method, is proposed. It produces an initial contour that is then refined by the ACM to yield an accurate segmentation. The ISIC-2017 Skin Lesion dataset is used for evaluation because of its challenging complexities. RESULTS: Four supervised performance evaluation metrics are employed to measure the accuracy and robustness of the proposed algorithm. Using the AOI algorithm, the ACM significantly (p ≤ 0.05) outperforms Otsu thresholding, with a Dice Similarity Coefficient (DSC) of 0.88, a Jaccard Index (JI) of 0.79, and a computational complexity of O(mn). CONCLUSIONS: Comparison with other state-of-the-art approaches demonstrates that the proposed method is superior to other skin lesion segmentation methods and requires no training time, which also makes it more efficient than deep learning and machine learning alternatives.
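The AOI method builds its initial contour on Otsu thresholding, whose single histogram pass gives the O(mn) complexity cited above. The paper's adaptive refinements are not reproduced here; the sketch below shows only the underlying idea under that assumption: a self-contained Otsu threshold and a hypothetical `initial_contour_mask` that uses the resulting foreground mask as the ACM's starting region.

```python
import numpy as np

def otsu_threshold(image):
    """Classic Otsu: pick the gray level that maximizes between-class
    variance. One histogram pass over an m-by-n image, hence O(mn)."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t in range(256):
        w0 += hist[t]                 # background pixel count
        if w0 == 0:
            continue
        w1 = total - w0               # foreground pixel count
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0               # background mean
        mu1 = (sum_all - sum0) / w1   # foreground mean
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def initial_contour_mask(image):
    """Hypothetical AOI-style initialization (illustrative only): use the
    Otsu foreground mask as the initial contour region handed to the ACM.
    The adaptive adjustments of the published AOI method are omitted."""
    return image > otsu_threshold(image)
```

In an actual pipeline, the boundary of this mask would seed the active contour, which then evolves to the true lesion boundary; the point of the automated step is simply that no manual placement is needed.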