Gliomas are the most common primary brain tumors. They are classified into four grades (Grade I–IV) according to the guidelines of the World Health Organization (WHO). Accurate grading of gliomas has clinical significance for treatment planning, prognosis, pre-diagnosis, monitoring, and the administration of chemotherapy. The purpose of this study is to develop a deep learning-based method that classifies brain tumor glioma grades from radiomic features using a deep neural network (DNN). The classifier was combined with the discrete wavelet transform (DWT), a powerful feature extraction tool. This study primarily focuses on the four main aspects of the radiomic workflow: tumor segmentation, feature extraction, analysis, and classification. We evaluated data from 121 patients with brain tumors (Grade II, n = 77; Grade III, n = 44) from The Cancer Imaging Archive, and 744 radiomic features were obtained by applying low sub-band and high sub-band 3D wavelet transform filters to the 3D tumor images. Quantitative values were statistically analyzed with Mann–Whitney U tests, and 126 radiomic features with statistically significant properties were selected across eight different wavelet filters. Classification performance of the 3D wavelet transform filter groups was measured with the deep learning classifier model using accuracy, sensitivity, F1 score, and specificity. The proposed model was highly effective in grading gliomas, with 96.15% accuracy, 94.12% precision, 100% recall, a 96.97% F1 score, and a 98.75% area under the ROC curve. As a result, deep learning and feature selection techniques with wavelet transform filters can be accurately applied to glioma grade classification using the proposed method.
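A minimal sketch of the feature extraction and selection stages described above, using PyWavelets and SciPy: a single-level 3D DWT yields eight sub-bands (mirroring the eight wavelet filter groups), simple first-order statistics are computed per sub-band, and features are filtered with Mann–Whitney U tests. The `coif1` wavelet, the toy statistics, and the synthetic volumes are illustrative assumptions, not the authors' 744-feature radiomic set.

```python
import numpy as np
import pywt
from scipy.stats import mannwhitneyu

def wavelet_features(volume):
    """First-order statistics on each sub-band of a single-level 3D DWT."""
    coeffs = pywt.dwtn(volume, "coif1")  # 8 sub-bands: 'aaa', 'aad', ..., 'ddd'
    feats = {}
    for band, arr in coeffs.items():
        feats[f"{band}_mean"] = float(arr.mean())
        feats[f"{band}_std"] = float(arr.std())
        feats[f"{band}_energy"] = float(np.square(arr).sum())
    return feats

# Toy cohort: random volumes standing in for segmented 3D tumor ROIs.
rng = np.random.default_rng(0)
grade2 = [wavelet_features(rng.normal(0.0, 1.0, (32, 32, 32))) for _ in range(20)]
grade3 = [wavelet_features(rng.normal(0.5, 1.2, (32, 32, 32))) for _ in range(20)]

# Keep only features whose distributions differ significantly between grades.
selected = []
for name in grade2[0]:
    x = [f[name] for f in grade2]
    y = [f[name] for f in grade3]
    _, p = mannwhitneyu(x, y, alternative="two-sided")
    if p < 0.05:
        selected.append(name)
print(f"{len(selected)} of {len(grade2[0])} features pass the U test")
```

The surviving feature vectors would then be fed to the DNN classifier; the p < 0.05 threshold is a conventional choice, not one stated in the abstract.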
Breast cancer, which develops from cells in the breast tissue, is the most common cancer among women. Early-stage detection could reduce death rates significantly, and the stage at detection determines the treatment process. Mammography is used to discover breast cancer at an early stage, prior to any physical sign. However, mammography might return a false negative, in which case a biopsy is recommended if a suspected lesion has a greater than two percent chance of being cancerous. Only about 30 percent of biopsies result in malignancy, which means the rate of unnecessary biopsies is high. To reduce unnecessary biopsies, Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) has recently been used to detect breast cancer, owing to its excellent capability in soft tissue imaging. Nowadays, DCE-MRI is a highly recommended method not only to identify breast cancer but also to monitor its development and to interpret tumorous regions. However, in addition to being a time-consuming process, its accuracy depends on the radiologist's experience. Radiomic data, on the other hand, are used in medical imaging and have the potential to extract disease characteristics that cannot be seen by the naked eye. Radiomics are hard-coded features that provide crucial information about the imaged disease. Conversely, deep learning methods such as convolutional neural networks (CNNs) learn features automatically from the dataset. In medical imaging especially, CNNs outperform methods based on hard-coded features. Combining the power of these two types of features, however, increases accuracy significantly, which is especially critical in medicine. Herein, a stacked ensemble of gradient boosting and deep learning models was developed to classify breast tumors using DCE-MRI images. The model makes use of radiomics acquired from pixel information in breast DCE-MRI images. Prior to training the model, factor analysis was applied to the radiomic features to refine the feature set and eliminate uninformative features. The performance metrics, as well as comparisons with several well-known machine learning methods, show that the ensemble model outperforms its counterparts. The ensemble model's accuracy is 94.87% and its AUC is 0.9728. Recall and precision are 1.0 and 0.9130, respectively, and the F1 score is 0.9545.
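A hedged sketch of the described pipeline in scikit-learn: factor analysis refines the feature set, and a stacked ensemble combines gradient boosting with a neural network. `MLPClassifier` stands in for the paper's deep learning model, synthetic data replaces the DCE-MRI radiomics, and the component counts and meta-learner are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import FactorAnalysis
from sklearn.ensemble import GradientBoostingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for a refined radiomic feature matrix.
X, y = make_classification(n_samples=300, n_features=100,
                           n_informative=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

stack = make_pipeline(
    FactorAnalysis(n_components=20, random_state=0),  # feature refinement
    StackingClassifier(
        estimators=[
            ("gb", GradientBoostingClassifier(random_state=0)),
            ("nn", MLPClassifier(hidden_layer_sizes=(64, 32),
                                 max_iter=1000, random_state=0)),
        ],
        final_estimator=LogisticRegression(),  # meta-learner over base outputs
    ),
)
stack.fit(X_tr, y_tr)
print(f"test accuracy: {stack.score(X_te, y_te):.4f}")
```

`StackingClassifier` fits the base learners with internal cross-validation and trains the meta-learner on their out-of-fold predictions, which is the standard way to avoid leaking training labels into the ensemble.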
Coronavirus-caused diseases are common worldwide and can harm both human health and the world economy. Most people will encounter a coronavirus at some point in their lives, and infection may result in pneumonia. Nowadays, the world is fighting against the new coronavirus, COVID-19. Its rate of increase is high, and the world was caught unprepared for the disease. In most regions of the world, COVID-19 testing is not possible due to the absence of diagnostic kits, and even where kits exist, their false-negative rate (giving a negative result for a person infected with COVID-19) is high. Early detection of COVID-19 is also crucial to keeping its morbidity and mortality rates low. The symptoms of different types of pneumonia are alike, and COVID-19 is no exception. The chest X-ray is the main reference in diagnosing pneumonia. Thus, the need for radiologists has increased considerably, not only to detect COVID-19 but also to identify the other abnormalities it causes. Herein, a transfer learning-based multi-class convolutional neural network model is proposed for the automatic detection of pneumonia and for differentiating non-COVID-19 pneumonia from COVID-19. The model, which takes chest X-ray images as input, extracts radiographic patterns from the images, turns them into valuable information, and monitors structural differences in the lungs caused by the diseases. The model was developed on two public datasets: the Cohen dataset and the Kermany dataset. It achieves an average training accuracy of 0.9886, an average training recall of 0.9829, and an average training precision of 0.9837. Moreover, the average training false-positive and false-negative rates are 0.0085 and 0.0171, respectively. On the test set, the model's average accuracy, average recall, and average precision are 97.78%, 96.67%, and 96.67%, respectively. According to the simulation results, the proposed model is promising: it can quickly and accurately classify chest images and can help doctors as a second reader in their final decision.
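A minimal sketch of a transfer-learning setup for the three-class problem (normal, non-COVID-19 pneumonia, COVID-19), using PyTorch. The ResNet-18 backbone, the frozen-feature strategy, and the hyperparameters are illustrative assumptions; the abstract does not name the backbone or training regime.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # normal / non-COVID-19 pneumonia / COVID-19

# Start from ImageNet-pretrained weights and replace only the classifier head.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in model.parameters():  # freeze the pretrained feature extractor
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new trainable head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# One illustrative training step on a dummy batch of X-ray-sized tensors.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"batch loss: {loss.item():.4f}")
```

In actual use, the dummy batch would be replaced by a `DataLoader` over the Cohen and Kermany images, and the frozen layers could later be unfrozen for fine-tuning.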
Diabetes mellitus is a common disease worldwide. In patients with progressive diabetes, the kidney tissue begins to deteriorate. Currently, the histopathologic examination of kidney tissue samples is performed manually by pathologists. This examination process is time-consuming and requires the pathologist's expertise. Thus, automatic detection methods are crucial for early detection and also for treatment planning. Computer-aided diagnostic systems based on deep learning show high success rates in classifying medical images if a large and diverse image set is available during training. Herein, a transfer learning-based convolutional neural network model is proposed for the automatic detection of diabetes mellitus using only rat kidney histopathology images. The model monitors structural changes caused by diabetic damage, especially in the glomerulus but also in other parts of the kidney. According to the simulation results, the proposed model reached 97.5% accuracy. As a result, the recommended model can quickly and accurately classify histopathology images and can help pathologists as a second reader in critical situations.
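A minimal sketch of the binary histopathology classifier (diabetic vs. healthy kidney tissue) in Keras, with the on-the-fly augmentation that small image sets typically need. The VGG16 backbone, the augmentation settings, and the image size are assumptions; the abstract only states that a transfer-learning CNN was used.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# ImageNet-pretrained convolutional base, kept frozen as a feature extractor.
base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    layers.RandomFlip("horizontal"),        # augmentation for a small dataset
    layers.RandomRotation(0.1),
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # diabetic vs. healthy
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```

The augmentation layers are active only during training, so inference on held-out histopathology slides sees the images unmodified.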