Magnetic resonance imaging (MRI) primarily captures the soft tissues of the human body and is typically performed before a patient is transferred to the surgical suite for a procedure. However, diagnosing tumors from MRI images manually is time-consuming. To address this, a new method for automatic brain tumor diagnosis was developed that combines image segmentation, feature extraction, and classification to isolate the region of interest in an MRI image corresponding to a brain tumor. The proposed method comprises five steps. First, image pre-processing is performed, using various filters to enhance image quality. Second, image thresholding is applied to segment the image. Third, feature extraction analyzes the morphological and structural properties of the segmented images. Fourth, feature selection is carried out using principal component analysis (PCA). Finally, classification is performed using an artificial neural network (ANN). In total, 74 features were extracted from each image, yielding a dataset of 144 observations; PCA was used to retain the 8 most informative features, on which the ANN classifier was trained. The proposed approach was evaluated against alternative methods and achieved 99.3% accuracy, 97.3% sensitivity, and a 98.5% F1 score, a notable improvement over the compared methods in precision, accuracy, and F1 score. These findings highlight the effectiveness of this approach in accurately segmenting and classifying MRI images.
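The abstract above outlines a five-step pipeline (filtering, thresholding, morphological feature extraction, PCA, ANN classification). The following is a minimal sketch of that kind of pipeline, not the authors' implementation: it assumes scikit-image and scikit-learn, uses a hypothetical `images` array and label vector `y`, and substitutes a small illustrative feature set for the 74 features reported in the paper.

```python
# Minimal sketch, assuming scikit-image and scikit-learn; the filter, the small
# set of region properties, and the network size are illustrative stand-ins for
# the paper's 74-feature pipeline, and `images` / `y` are hypothetical inputs.
import numpy as np
from skimage.filters import median, threshold_otsu
from skimage.measure import label, regionprops
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def extract_features(mri_slice: np.ndarray) -> np.ndarray:
    """Pre-process, threshold, and measure the largest segmented region."""
    denoised = median(mri_slice)                     # step 1: filtering
    mask = denoised > threshold_otsu(denoised)       # step 2: thresholding
    regions = regionprops(label(mask), intensity_image=denoised)
    r = max(regions, key=lambda p: p.area)           # assume tumor = largest region
    return np.array([r.area, r.perimeter, r.eccentricity, r.solidity, r.extent,
                     r.orientation, r.axis_major_length, r.axis_minor_length,
                     r.intensity_mean, r.intensity_max, r.intensity_min])


# Steps 3-5: feature matrix -> PCA (8 components) -> ANN classifier.
X = np.stack([extract_features(img) for img in images])     # `images`, `y` assumed
model = make_pipeline(StandardScaler(),
                      PCA(n_components=8),
                      MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000))
model.fit(X, y)
```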
Aims: This study evaluates the effectiveness of several machine learning models in classifying esophageal cancer from MRI scans, including a Convolutional Neural Network (CNN), K-Nearest Neighbors (KNN), a Recurrent Neural Network (RNN), and Visual Geometry Group 16 (VGG16). The aim is to identify the most accurate model and thereby improve diagnostic accuracy and early detection of this disease, ultimately improving clinical practice and its outcomes through advanced machine learning techniques in medical diagnosis.
Background: Esophageal cancer poses a critical problem for medical oncologists: its pathology is complex and its mortality rate is exceptionally high. Early detection is essential for effective treatment and improved survival, yet conventional diagnostic methods suffer from limited sensitivity and specificity. Recent progress in machine learning offers the possibility of higher sensitivity and specificity in diagnosis. This paper explores the potential of different machine learning models to classify esophageal cancer from MRI scans and thereby address the limitations of traditional diagnostic approaches.
Objective: To verify whether CNN, KNN, RNN, VGG16, and other advanced machine learning models can correctly classify esophageal cancer from MRI scans, to establish the diagnostic accuracy of each model, and to identify the best-performing one, contributing to early detection mechanisms that improve confidence in patient outcomes in the clinical setting.
Methods: The study performs a comparative analysis of four machine learning models for classifying esophageal cancer from MRI scans. Each model was trained and validated on a standardized set of MRI data, and effectiveness was assessed with accuracy, precision, recall, and F1 score.
Results: VGG16 was the best-performing model, with a high accuracy of 96.66%. CNN followed with an accuracy of 94.5%, showing efficient spatial pattern recognition. KNN and RNN also performed commendably, with accuracies of 91.44% and 88.97%, respectively, reflecting their strengths in proximity-based learning and in handling sequential data. These findings underline the potential of machine learning models to add significant value to esophageal cancer diagnosis.
Conclusion: Machine learning techniques, mainly VGG16 and CNN, show high potential for improved diagnostic precision in classifying esophageal cancer from MRI imaging. VGG16 achieved the highest accuracy, CNN demonstrated strong spatial detection, and KNN and RNN followed. These results open opportunities for introducing advanced computational models into the clinic, which could transform early detection strategies and improve patient-centered outcomes in oncology.
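The comparative evaluation described in the Methods section above can be sketched as below. This is a minimal sketch assuming scikit-learn and already-fitted classifiers; `models`, `X_test`, and `y_test` are hypothetical placeholders, and weighted averaging is an assumed choice the abstract does not specify.

```python
# Minimal sketch, assuming scikit-learn; `models` (name -> fitted classifier),
# `X_test`, and `y_test` are hypothetical, and weighted averaging is an
# assumption rather than the authors' stated setting.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score


def compare_models(models, X_test, y_test):
    """Return accuracy, precision, recall, and F1 score for each candidate model."""
    report = {}
    for name, clf in models.items():
        y_pred = clf.predict(X_test)
        report[name] = {
            "accuracy":  accuracy_score(y_test, y_pred),
            "precision": precision_score(y_test, y_pred, average="weighted"),
            "recall":    recall_score(y_test, y_pred, average="weighted"),
            "f1":        f1_score(y_test, y_pred, average="weighted"),
        }
    return report
```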
Aim: This work combines different AI methods into a modular diagnosis system for lung cancer, applying a Convolutional Neural Network (CNN), K-Nearest Neighbors (KNN), VGG16, and a Recurrent Neural Network (RNN) to MRI biomarkers. The models were evaluated and compared on their effectiveness in detecting cancer using a carefully curated dataset of 2,045 MRI images, with emphasis on documenting the benefits of a multimodal approach for addressing the complexities of the disease.
Background: Lung cancer remains the most common cause of cancer death worldwide, partly because of diagnostic challenges and late-stage presentation. Although magnetic resonance imaging (MRI) has become a critical modality for identifying and staging lung cancer, its effectiveness is often limited by interpretative variance among radiologists. Recent advances in machine learning hold great promise for augmenting MRI analysis and increasing diagnostic accuracy so that treatment can start earlier. This work investigates the integration of advanced machine learning models with MRI biomarkers to address these problems.
Objective: To assess the effectiveness of integrating machine learning models (CNN, KNN, VGG16, and RNN) with MRI biomarkers for lung cancer diagnosis. Using a dataset of 2,045 MRI images, the models' performance metrics were compared to determine the best configuration, underscoring the value of this multimodal approach for accurate diagnoses and, consequently, better patient outcomes.
Methods: The study used 2,045 MRI images, with 70% for training and 30% for validation. Four machine learning models were applied to the images: CNN, KNN, VGG16, and RNN. Performance was measured systematically with accuracy, recall, precision, and F1 score, and confusion matrices were used to compare the diagnostic power of each model and assess its real-world predictive capability.
Results: The CNN achieved the best scores on the measures tested: recall, accuracy, precision, and F1. The remaining models (KNN, VGG16, and RNN) performed decently but slightly below the CNN. The confusion-matrix analysis established the predictive reliability of the models, providing insight into their ability to identify true positives and minimize false negatives and thereby improve the diagnostic accuracy of lung cancer detection.
Conclusion: The findings support the great potential of integrating advanced machine learning models with MRI biomarkers to improve lung cancer diagnosis. The high performance of the CNN, the sensitivity and specificity of the KNN model, and the robustness of the VGG16 and RNN results point to the feasibility of AI in accurate cancer detection. This work provides strong support for a multimodal diagnostic approach that could shape future oncology practice by integrating AI to improve treatment strategies and patient outcomes in medical imaging.
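The 70/30 split and confusion-matrix comparison described in the Methods section above can be sketched as follows. This assumes scikit-learn, with `X`, `y`, and `models` as hypothetical stand-ins for the 2,045-image dataset and the four fitted classifiers; the CNN and VGG16 architectures themselves are not reproduced here.

```python
# Minimal sketch, assuming scikit-learn; `X`, `y`, and `models` are hypothetical
# stand-ins for the 2,045-image dataset and the four fitted classifiers.
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# 70% training / 30% validation, as described in the Methods section.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0)

for name, clf in models.items():        # each model fitted on (X_train, y_train)
    cm = confusion_matrix(y_val, clf.predict(X_val))
    print(name)
    print(cm)                           # rows: true classes; columns: predictions
```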