Dementias that develop in older people test the limits of modern medicine. Alzheimer’s disease (AD) is by far the most prevalent form of dementia in older people. For over fifty years, AD was diagnosed by medical and exclusion criteria with an accuracy of only 85 per cent, and a correct diagnosis could be validated only through postmortem examination. Applying machine learning (ML) techniques to Magnetic Resonance Imaging (MRI) data can speed up the diagnosis of AD and help predict the course of the disease: dementia in individual seniors can be predicted from AD screening data using ML classifiers, and classifier performance for AD subjects can be enhanced by combining the MRI features with demographic information and the patient’s preexisting conditions. In this article, we use the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset and propose a framework for the AD/non-AD classification of dementia patients using longitudinal brain MRI features and a Deep Belief Network (DBN) trained with the Mayfly Optimization Algorithm (MOA). An IoT-enabled portable MR imaging device captures real-time patient MR images, and anomalies in the MRI scans are identified to detect and classify AD. Our experiments validate that the predictive power of all models is greatly enhanced by including early information about comorbidities and medication characteristics; the random forest model outperforms the other models in precision. This research is the first to examine how AD forecasting can benefit from multimodal time-series data. The proposed technique achieves a DBN-MOA accuracy of 97.456%, an F-score of 93.187%, a recall of 95.789% and a precision of 94.621%, demonstrating its ability to distinguish between healthy and diseased patients.
The experimental results of this research demonstrate the efficacy, superiority, and applicability of the DBN-MOA algorithm developed for AD diagnosis.
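The mayfly-trained DBN itself is not reproduced here, but the population dynamics behind the Mayfly Optimization Algorithm can be sketched on a toy objective. Everything below — the sphere objective, the coefficients, and the population sizes — is an illustrative assumption, not the paper's implementation:

```python
import random

def sphere(x):
    """Toy objective: sum of squares, minimised at the origin."""
    return sum(v * v for v in x)

def mayfly_optimize(obj, dim=4, n=20, iters=200, seed=42):
    """Minimal mayfly-inspired optimiser (an illustrative sketch, not the
    paper's MOA): males chase the swarm best, females chase their paired
    male, and crossover offspring replace weaker parents."""
    rng = random.Random(seed)
    males = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    females = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel_m = [[0.0] * dim for _ in range(n)]
    vel_f = [[0.0] * dim for _ in range(n)]
    best = min(males + females, key=obj)[:]
    init_val = obj(best)

    for _ in range(iters):
        for i in range(n):
            # Males are pulled toward the best position found so far.
            for d in range(dim):
                vel_m[i][d] = 0.7 * vel_m[i][d] + rng.random() * (best[d] - males[i][d])
                males[i][d] += vel_m[i][d]
            # Females are pulled toward their paired male if he is fitter,
            # otherwise they take a small random step (the "nuptial dance").
            attracted = obj(males[i]) < obj(females[i])
            for d in range(dim):
                if attracted:
                    vel_f[i][d] = 0.7 * vel_f[i][d] + rng.random() * (males[i][d] - females[i][d])
                else:
                    vel_f[i][d] = 0.7 * vel_f[i][d] + rng.uniform(-0.1, 0.1)
                females[i][d] += vel_f[i][d]
            # Crossover: the offspring replaces the male parent if fitter.
            child = [(m + f) / 2 for m, f in zip(males[i], females[i])]
            if obj(child) < obj(males[i]):
                males[i] = child
        cand = min(males + females, key=obj)
        if obj(cand) < obj(best):
            best = cand[:]
    return best, obj(best), init_val

best, val, init_val = mayfly_optimize(sphere)
```

In the paper the position vector would encode DBN weights and the objective would be the classification loss; the sphere function merely stands in for that loss here.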
In wireless personal area networks (WPANs), devices can communicate with each other without relying on a central router or access point. This improves performance and efficiency by allowing devices to share resources directly; however, managing resource allocation and optimizing communication between devices is challenging. This paper proposes an intelligent load-based resource optimization model to enhance the performance of device-to-device (D2D) communication in 5G WPANs. The strategy maximizes the efficiency and effectiveness of resource usage in D2D communication by managing the load on the network: the current load is monitored, and the usage of resources such as bandwidth and power is adjusted so that the network is not overloaded and overall performance is optimized. This type of optimization is essential in D2D communication because it can reduce costs and improve system performance. The proposed model achieves 86.00% network efficiency, 93.74% throughput, 91.94% reduced latency, and 92.85% scalability.
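The load-monitoring idea described above can be illustrated with a minimal sketch. The proportional-sharing rule and the function name are assumptions for illustration, not the paper's model:

```python
def allocate_bandwidth(loads, capacity):
    """Proportional load-based allocation (an illustrative sketch of the
    idea, not the paper's model): links get their full demand while the
    network is under-loaded, and a proportional share of the total
    capacity once the offered load exceeds it."""
    total = sum(loads)
    if total <= capacity:
        return [float(l) for l in loads]   # no overload: serve each demand as-is
    scale = capacity / total               # overload: scale every link down
    return [l * scale for l in loads]
```

The same scale-back rule could be applied to transmit power budgets; the point is simply that allocation reacts to the measured load rather than being fixed per link.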
The field of electroencephalography (EEG) has made significant contributions to our understanding of the brain and of neurological diseases, and to our ability to treat such diseases. Epileptic seizures, strokes, and even death can be detected with the electroencephalogram, a diagnostic technique used to record electrical activity in the brain. This research proposes binary classification for automated epilepsy diagnosis. Patients' EEG signals are recorded and then pre-processed. From the results of the feature extraction step, the best features are selected for further examination by a structured genetic algorithm. The EEG data are then analysed and categorized as either seizure-free or epileptic-seizure-related by applying a support vector classifier to the optimized features. Categorizing EEG signals is thus an ideal application for the suggested technique. To accelerate the implementation on distributed computing, Chaotic Elephant Herding Optimization based Classification (CEHOC) is used to classify a wide range of datasets. The results show that the CEHOC algorithm is more effective than previous versions. Precision, recall, F-score, sensitivity, specificity, and accuracy are used to assess the effectiveness of the work. The suggested work achieves a 99.3019% accuracy rate, a 98.2018% sensitivity rate, and a 99.1125% specificity rate, with an F-score of 99.3204%, a precision of 99.1019%, and a recall of 98.3015%. These results indicate that the proposed approach is effective.
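The genetic-algorithm feature-selection step can be sketched as follows. The toy fitness function (which stands in for the separability score a support vector classifier would provide), the feature count, and the GA parameters are all illustrative assumptions, not the paper's settings:

```python
import random

INFORMATIVE = {0, 3, 7}   # toy ground truth: the features that carry signal
N_FEATURES = 12

def fitness(mask):
    """Toy separability score: reward informative features, penalize noise.
    A real pipeline would train and score the support vector classifier here."""
    hits = sum(1 for i in INFORMATIVE if mask[i])
    noise = sum(mask) - hits
    return hits - 0.5 * noise

def genetic_select(pop_size=30, gens=40, seed=1):
    """GA over feature bitmasks: tournament selection, one-point
    crossover, bit-flip mutation, with elitism to keep the best mask."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(pop_size)]
    elite = max(pop, key=fitness)
    init_fit = fitness(elite)
    for _ in range(gens):
        def pick():
            a, b = rng.sample(pop, 2)        # binary tournament
            return a if fitness(a) >= fitness(b) else b
        nxt = [elite[:]]                      # elitism: carry the best forward
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, N_FEATURES)
            child = p1[:cut] + p2[cut:]       # one-point crossover
            if rng.random() < 0.1:            # occasional bit-flip mutation
                child[rng.randrange(N_FEATURES)] ^= 1
            nxt.append(child)
        pop = nxt
        elite = max(pop, key=fitness)
    return elite, init_fit

mask, init_fit = genetic_select()
```

With elitism the best fitness never decreases across generations, which is why the selected mask is at least as good as the best random initial mask.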
Emerging consumer devices rely on the next generation of IoT for the connected support needed to undergo digital transformation. The main challenge for next-generation IoT is to fulfil the requirements of robust connectivity, uniform coverage and scalability in order to reap the benefits of automation, integration and personalization. Next-generation mobile networks, including beyond-5G and 6G technology, play an important role in delivering intelligent coordination and functionality among consumer nodes. This paper presents a 6G-enabled scalable cell-free IoT network that guarantees uniform quality-of-service (QoS) to the proliferating wireless nodes or consumer devices. By enabling the optimal association of nodes with access points (APs), it offers efficient resource management. A scheduling algorithm is proposed for the cell-free model such that the interference caused by neighbouring nodes and neighbouring APs is minimised. Mathematical formulations are derived to carry out the performance analysis with different precoding schemes. Further, the allocation of pilots for obtaining the association with minimum interference is managed using different pilot lengths. The proposed algorithm offers an improvement of 18.9% in achieved spectral efficiency using the partial regularized zero-forcing (PRZF) precoding scheme at pilot length τp=10. Finally, the model is compared with two baselines that use random scheduling and no scheduling. Compared with random scheduling, the proposed scheduling improves the spectral efficiency obtained by 95% of the user nodes by 10.9%.
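The node-to-AP association idea can be sketched as a greedy, capacity-constrained assignment. The gain matrix, the per-AP capacity limit, and the greedy rule are illustrative assumptions, not the paper's scheduler:

```python
def associate(gains, cap):
    """Greedy interference-aware association (an illustrative sketch, not
    the paper's algorithm): nodes are processed in order of their best
    channel gain and attached to the strongest access point (AP) that
    still has capacity, so weak cross-links are avoided."""
    n_nodes, n_aps = len(gains), len(gains[0])
    order = sorted(range(n_nodes), key=lambda u: -max(gains[u]))
    load = [0] * n_aps
    assign = [-1] * n_nodes                  # -1 marks a node left unscheduled
    for u in order:
        for ap in sorted(range(n_aps), key=lambda a: -gains[u][a]):
            if load[ap] < cap:
                assign[u] = ap
                load[ap] += 1
                break
    return assign

# Three nodes, two APs; each row is one node's gain toward each AP.
gains = [[0.9, 0.1], [0.8, 0.2], [0.3, 0.7]]
```

With capacity 2 per AP, the two nodes with strong gains toward AP 0 share it and the third node attaches to AP 1; with capacity 1, one node is deferred — the situation a pilot-allocation step would then have to resolve.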
Display refrigerators consume a significant amount of energy, and improving their efficiency is essential to minimize energy consumption and greenhouse gas emissions. Providing the refrigeration system with a reliable and energy-efficient control mechanism is therefore a real challenge. This study aims to design and evaluate an intelligent control system (ICS) using artificial neural networks (ANNs) for the performance optimization of solar-powered display refrigerators (SPDRs). The SPDR was first operated using the traditional control system (TCS) at a fixed frequency of 60 Hz, and then at variable frequencies ranging from 40 to 60 Hz using the designed ANN-based ICS combined with a variable-speed drive. A stand-alone PV system provided the refrigerator with the required energy under both control options. For the performance evaluation, the operation of the SPDR after the modification of its control system was compared with its performance under the TCS at target refrigeration temperatures of 1, 3, and 5 °C and ambient temperatures of 23, 29, and 35 °C. With the variable frequency controlled by the modified control system (MCS), the power, energy consumption, and coefficient of performance (COP) of the SPDR improved. The results show that both control mechanisms maintain the same cooling temperature, but the traditionally controlled refrigerator consumes significantly more energy (p < 0.05). At the same target cooling temperature, increasing the ambient temperature decreased the COP of the SPDR under both the TCS and the MCS. The average daily COP of the SPDR varied from 2.8 to 3.83 and from 1.91 to 2.82 with the TCS and MCS, respectively. The comparison of the two configurations indicated that the developed ICS saved about 35.5% of the energy at the 5 °C target cooling temperature and drew power more smoothly when the ambient temperature was high.
The COP of the SPDR with the MCS was higher than with the TCS by 26.37%, 26.59%, and 24.22% at average daily ambient temperatures of 23 °C, 29 °C, and 35 °C, respectively. The developed ANN-based control system optimized the SPDR's performance and proved to be a suitable tool for the refrigeration industry.
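The ANN-based frequency control idea can be sketched as a tiny feedforward network that maps operating conditions onto the 40-60 Hz drive band described above. The network weights below are arbitrary placeholders for illustration, not the trained values from the study:

```python
import math

def ann_frequency(ambient_c, target_c):
    """Tiny two-hidden-unit feedforward net mapping (ambient temperature,
    target temperature) to a compressor drive frequency in the 40-60 Hz
    band. The weights are illustrative placeholders; a real ICS would use
    values learned from measured refrigerator performance."""
    w_hidden = [(0.08, -0.05, 0.1),    # (w_ambient, w_target, bias) per unit
                (0.03, -0.12, -0.2)]
    w_out = (1.2, 0.8, 0.0)            # hidden-to-output weights + bias
    h = [math.tanh(wa * ambient_c + wt * target_c + b) for wa, wt, b in w_hidden]
    y = math.tanh(w_out[0] * h[0] + w_out[1] * h[1] + w_out[2])
    return 50.0 + 10.0 * y             # tanh output squashed into 40-60 Hz

f_hot = ann_frequency(35, 5)           # hot ambient, 5 °C target
f_mild = ann_frequency(23, 5)          # mild ambient, same target
```

Because the output stage is a scaled tanh, the commanded frequency can never leave the 40-60 Hz band, and with these (positive) ambient weights a hotter ambient temperature drives the compressor faster — the qualitative behaviour the variable-speed drive exploits.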