Diagnosis and surgical resection of brain tumors from Magnetic Resonance (MR) images is a challenging task: neurological deficits after surgery must be minimized despite non-linear variation in tumor size, shape, and texture. Radiologists, clinical experts, and brain surgeons examine brain MRI scans with available methods that are tedious, error-prone, and time-consuming, and that still exhibit positional errors of up to 2–3 mm, a large margin at the scale of brain tissue. In this context, we propose an automated Ultra-Light Brain Tumor Detection (UL-BTD) system based on a novel Ultra-Light Deep Learning Architecture (UL-DLA) for deep features, integrated with highly distinctive textural features extracted by the Gray Level Co-occurrence Matrix (GLCM). Together these form a Hybrid Feature Space (HFS), which is used for tumor detection with a Support Vector Machine (SVM), yielding high prediction accuracy and minimal false negatives with a network small enough to fit within the GPU resources of an average modern PC. The objective of this study is to categorize multi-class, publicly available MRI brain tumor datasets in minimal time, so that real-time tumor detection can be carried out without compromising accuracy. Our framework includes a sensitivity analysis of image size and of One-versus-All and One-versus-One coding schemes, with K-fold cross-validation as part of the evaluation protocol to assess the complexity and reliability of the proposed system. The best generalization achieved with the SVM has an average detection rate of 99.23% (99.18%, 98.86%, and 99.67%) and an F-measure of 0.99 (0.99, 0.98, and 0.99) for glioma, meningioma, and pituitary tumors, respectively. Our results improve on the state of the art (97.30%) by about 2%, indicating that the system is a candidate for translation to real-time surgical brain applications in modern hospitals.
The method needs 11.69 ms, at an accuracy of 99.23%, to detect tumors in a test image, compared with the 15 ms reported by the earlier state of the art, without any dedicated hardware, providing a route to a desktop application for brain surgery.
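The hybrid feature space the abstract describes (GLCM texture statistics concatenated with deep features) can be sketched as follows. This is a minimal illustration, not the paper's UL-DLA: the `deep_features` vector is a random stand-in for the network's output, and the single-offset GLCM with three Haralick-style statistics is an assumption about which texture features are used.

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Normalized Gray Level Co-occurrence Matrix for one pixel offset."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)  # quantize
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_features(p):
    """Contrast, energy, and homogeneity of a normalized GLCM."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return np.array([contrast, energy, homogeneity])

# Hybrid Feature Space: concatenate textural and (stand-in) deep features,
# then feed the result to an SVM as in the paper's pipeline.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))     # placeholder MR slice
deep_features = rng.standard_normal(16)       # placeholder for UL-DLA output
hfs = np.concatenate([glcm_features(glcm(img)), deep_features])
print(hfs.shape)  # (19,)
```

In practice the texture statistics would be computed over several offsets and angles, and `deep_features` would come from the trained network's penultimate layer.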
The evolving "Industry 4.0" domain encompasses a collection of future industrial developments involving cyber-physical systems (CPS), the Internet of Things (IoT), big data, cloud computing, and related technologies. The Industrial Internet of Things (IIoT) directs data from systems that monitor and control the physical world to data-processing systems. A notable component of the IIoT is the unmanned aerial vehicle (UAV), an efficient remote-sensing platform for gathering data over large regions. UAVs are commonly employed in the industrial sector to solve a range of problems and to support decision making. However, strict data-privacy regulations can hinder data sharing across autonomous UAVs. Federated learning (FL) is a recent advance in machine learning (ML) that aims to protect user data. Accordingly, this study designs a federated learning with blockchain-assisted image classification model for clustered UAV networks (FLBIC-CUAV) in the IIoT environment. The proposed FLBIC-CUAV technique involves three major processes: clustering, blockchain-enabled secure communication, and FL-based image classification. For UAV cluster construction, a beetle swarm optimization (BSO) algorithm with three input parameters is designed to cluster the UAVs for effective communication. In addition, a blockchain-enabled secure data transmission process transmits the data from the UAVs to cloud servers. Finally, the cloud server uses FL with a Residual Network model to carry out image classification. A wide range of simulation analyses was performed to assess the FLBIC-CUAV approach, and the experimental outcomes show its improvement over recent state-of-the-art methods.
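The abstract does not specify which FL aggregation rule the cloud server uses; a common choice, federated averaging (FedAvg), can be sketched as below. The client arrays are hypothetical stand-ins for UAV-local model parameters, and the data-size weighting is the standard FedAvg convention, not a detail taken from the paper.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: combine client parameters, weighting each
    client by the size of its local dataset."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hypothetical UAV clients, each holding local model parameters of
# the same shape; the server never sees the raw images, only these weights.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 10, 20]
global_w = fedavg(clients, sizes)
print(global_w)  # [3.5 4.5]
```

In a real deployment each client would train locally for a few epochs, upload its updated parameters (here, over the blockchain-secured channel), and download the new global model for the next round.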
Currently, social media plays an important role in daily life. Millions of people use social media for different purposes, and large amounts of data flow through online networks every second; these data contain valuable information that can be extracted if they are properly processed and analyzed. However, most processing results are affected by preprocessing difficulties. This paper presents an approach to extract information from Arabic social media text. It provides an integrated solution to the challenges of preprocessing Arabic social media text in four stages: data collection, cleaning, enrichment, and availability. The preprocessed Arabic text is stored in structured database tables, providing a useful corpus to which information extraction and data analysis algorithms can be applied. The experiment in this study shows that the proposed approach yields a useful, full-featured dataset and valuable information. The resulting dataset presents the Arabic text at three structured levels with more than 20 features. Additionally, the experiment provides valuable processed results such as topic classification and sentiment analysis.
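The cleaning stage can be sketched with typical Arabic social-media normalization steps: removing URLs and mentions, stripping diacritics (tashkeel) and the tatweel character, and folding common letter variants. The exact rules the paper applies are not given in the abstract, so this is an assumed, minimal pipeline.

```python
import re

# Arabic diacritic ranges (tashkeel) and the tatweel elongation character.
AR_DIACRITICS = re.compile(r'[\u0610-\u061A\u064B-\u065F\u0670]')
TATWEEL = '\u0640'

def clean_arabic(text):
    """Basic cleaning: drop URLs, mentions, and hashtag marks; strip
    diacritics and tatweel; normalize common Arabic letter variants."""
    text = re.sub(r'https?://\S+|@\w+', ' ', text)          # URLs, mentions
    text = text.replace('#', ' ').replace(TATWEEL, '')
    text = AR_DIACRITICS.sub('', text)                      # strip tashkeel
    text = re.sub('[\u0623\u0625\u0622]', '\u0627', text)   # alef variants
    text = text.replace('\u0649', '\u064A')                 # ى -> ي
    return ' '.join(text.split())

print(clean_arabic('أَهْلاً @user https://t.co/x #مرحبا'))  # اهلا مرحبا
```

The enrichment and availability stages would then attach features (e.g. tokens, language tags, sentiment labels) to the cleaned text and load it into the structured database tables the paper describes.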
Recently, the COVID-19 epidemic has had a major impact on the day-to-day life of people all over the globe, and it demands various kinds of screening tests to detect the coronavirus. The development of deep learning (DL) models combined with radiological images is useful for accurate detection and classification. DL models have many hyperparameters, and identifying the optimal configuration in such a high-dimensional space is not a trivial challenge. Since setting the hyperparameters requires expertise and extensive trial and error, metaheuristic algorithms can be employed. With this motivation, this paper presents an automated glowworm swarm optimization (GSO) with an inception-based deep convolutional neural network (IDCNN) for COVID-19 diagnosis and classification, called the GSO-IDCNN model. The presented model involves a Gaussian smoothing filter (GSF) to remove noise from the radiological images. Additionally, an IDCNN-based feature extractor is utilized, which makes use of the Inception v4 model. To further enhance the performance of the IDCNN technique, the hyperparameters are optimally tuned using the GSO algorithm. Lastly, an adaptive neuro-fuzzy classifier (ANFC) is used for classifying the existence of COVID-19. The design of the GSO algorithm with the ANFC model for COVID-19 diagnosis shows the novelty of the work. For experimental validation, a series of simulations was performed on benchmark radiological imaging databases to highlight the superior outcome of the GSO-IDCNN technique. The experimental values show that the GSO-IDCNN methodology demonstrated a proficient outcome, offering a maximal sensitivity of 0.9422, specificity of 0.9466, precision of 0.9494, accuracy of 0.9429, and F1-score of 0.9394.
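The core GSO loop (a luciferin update followed by movement toward a probabilistically chosen brighter neighbor) can be sketched on a toy objective. All constants here are illustrative; in the paper's setting the objective would be the IDCNN's validation performance for a candidate hyperparameter vector, not the stand-in function below.

```python
import numpy as np

def gso(objective, dim=2, n=20, iters=60, rho=0.4, gamma=0.6,
        step=0.05, radius=1.0, seed=0):
    """Minimal glowworm swarm optimization sketch (maximization)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-3, 3, size=(n, dim))   # glowworm positions
    luc = np.full(n, 5.0)                   # initial luciferin
    for _ in range(iters):
        # Luciferin update: decay plus reward proportional to fitness.
        luc = (1 - rho) * luc + gamma * np.array([objective(p) for p in x])
        for i in range(n):
            d = np.linalg.norm(x - x[i], axis=1)
            nbrs = np.where((d < radius) & (luc > luc[i]))[0]
            if len(nbrs):
                # Move a small step toward a brighter neighbor, chosen with
                # probability proportional to its luciferin advantage.
                gap = luc[nbrs] - luc[i]
                j = rng.choice(nbrs, p=gap / gap.sum())
                diff = x[j] - x[i]
                x[i] = x[i] + step * diff / (np.linalg.norm(diff) + 1e-12)
    return max(x, key=objective)

# Toy objective with its peak at the origin; a real run would instead score
# a hyperparameter vector by training/validating the IDCNN.
f = lambda p: -np.sum(p ** 2)
best = gso(f)
print(best)
```

Because each fitness evaluation would mean training a network, the swarm size and iteration count are kept small in practice, which is exactly why a derivative-free metaheuristic like GSO is attractive here.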