In the era of cloud computing and large-scale distributed systems, ensuring uninterrupted service and operational reliability is crucial. Conventional fault-tolerance techniques are typically reactive, addressing problems only after they arise, which can lead to performance degradation and downtime. This research offers a proactive approach to fault tolerance for distributed systems, using predictive machine learning models to anticipate significant failures before they occur. Our work combines machine learning algorithms with real-time analysis of large streams of operational data to predict system anomalies and potential breakdowns. We employ supervised learning algorithms such as Random Forests and Gradient Boosting to predict faults with high accuracy. The predictive models are trained on historical data, capturing the intricate patterns and correlations that precede system faults. The early fault detection enabled by this proactive approach allows preventive remedial measures to be taken, reducing downtime and preserving system integrity. To validate the approach, we designed and implemented a fault prediction framework within a simulated distributed system environment that mirrors contemporary cloud architectures. Our experiments demonstrate that the predictive models can forecast a wide range of faults, from hardware failures to network disruptions, with significant lead time, providing a critical window for implementing preventive measures. We also assessed the impact of these pre-emptive actions on overall system performance, showing improved reliability and a reduction in mean time to recovery (MTTR), and we analyse the scalability and adaptability of the proposed solution in diverse and dynamic distributed environments. Through seamless integration with existing monitoring and management tools, the framework enhances fault-tolerance capabilities without requiring extensive restructuring of current systems. In contrast to traditional reactive methods that respond to failures after they occur, this work focuses on anticipating faults before they happen.
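As a rough illustration of the prediction step the abstract describes, the sketch below trains Random Forest and Gradient Boosting classifiers on operational metrics to flag an impending fault, assuming scikit-learn. The feature names, synthetic data, and fault-label rule are illustrative placeholders, not the paper's dataset or pipeline.

```python
# Hedged sketch: supervised fault prediction from operational metrics (assumed features).
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Hypothetical metrics sampled per monitoring interval:
# CPU utilisation, memory utilisation, disk I/O wait, network latency.
n = 5000
X = rng.random((n, 4))
# Synthetic label: "fault within the next window", loosely correlated with high load.
y = ((0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 3]
      + rng.normal(0, 0.05, n)) > 0.7).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

for model in (RandomForestClassifier(n_estimators=200, random_state=0),
              GradientBoostingClassifier(random_state=0)):
    model.fit(X_train, y_train)
    print(type(model).__name__)
    print(classification_report(y_test, model.predict(X_test), digits=3))
```

In a deployment such as the one described, the positive predictions would trigger preventive actions (for example, migrating workloads or restarting a suspect node) ahead of the forecast fault.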
Recent research highlights the need for more patient-oriented monitoring systems for cardiac health, especially in the aftermath of COVID-19. The study introduces a contactless and affordable ECG device capable of recording heart arrhythmias for remote monitoring, which is vital in managing the rising incidence of untimely heart attacks. Two deep learning algorithms, RCANN (Real-time Compressed Artificial Neural Network) and RCCNN (Real-time Compressed Convolutional Neural Network), based on ANN and CNN respectively, were developed for the system. These methods classify and analyse three different forms of ECG dataset: raw, filtered, and filtered + compressed signals, developed in this study to identify the most suitable type of dataset for regular/remote monitoring. The data are prepared using online ECG signals from PhysioNet and real-time signals acquired from an Arduino-based ECG sensor device. Performance is analysed in terms of accuracy, sensitivity, specificity, and F1-score for all of the designed ECG databases using both RCCNN and RCANN. On raw data, RCCNN achieves an accuracy of 99.2%, sensitivity of 99.7%, specificity of 99.2%, and F1-score of 99.2%, while RCANN achieves 93.2%, 91.5%, 95.1%, and 93.5%, respectively. On filtered data, RCCNN achieves an accuracy of 97.7%, sensitivity of 95.9%, specificity of 99.4%, and F1-score of 97.6%, while RCANN achieves 90.5%, 85.8%, 96.4%, and 90.9%. On filtered + compressed data, RCCNN achieves an accuracy of 96.6%, sensitivity of 97.6%, specificity of 95.7%, and F1-score of 96.5%, while RCANN achieves 85.2%, 79.2%, 94.5%, and 86.7%. The performance evaluation shows that RCCNN with the filtered and compressed dataset outperforms the other approaches for telemonitoring, making it a promising approach for individualized cardiac health management.
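For orientation, the sketch below shows a generic 1-D convolutional classifier for single-lead ECG segments, assuming PyTorch. The layer sizes, the three-class output, and the 360-sample segment length are assumptions for illustration; the paper's RCCNN and RCANN architectures are not reproduced here.

```python
# Hedged sketch: a small 1-D CNN for ECG segment classification (not the paper's RCCNN).
import torch
import torch.nn as nn

class ECGConvNet(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # pool over time to a fixed-size embedding
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                     # x: (batch, 1, segment_len)
        z = self.features(x).squeeze(-1)      # (batch, 64)
        return self.classifier(z)             # raw logits per arrhythmia class

# Smoke test on a random batch shaped like 360-sample single-lead ECG segments.
model = ECGConvNet()
logits = model(torch.randn(8, 1, 360))
print(logits.shape)  # torch.Size([8, 3])
```

Filtering and compression of the input signal, as studied in the paper, would be applied before segments are fed to such a network.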
Machine learning approaches can provide immediate support and medical care by identifying Alzheimer's disease (AD) as early as possible. By analyzing patterns and features in large datasets, these approaches can identify subtle changes in brain structure, function, or biomarkers that may indicate the presence of the disease at an early stage. Early detection allows for timely intervention and treatment, potentially improving patient outcomes. Using MRI and PET scan image datasets specific to Alzheimer's disease, this study compares the performance of several pre-trained models, such as VGG-16, VGG-19, ResNet-50, Inception-V3, and DenseNet-121, with the proposed model, ResNet-53. The main goal is to assess and compare how well these models discriminate between healthy individuals and people with AD. We use transfer learning to optimize all models and compare their accuracy and precision. Our findings show significant differences in performance, with certain models exhibiting higher accuracy on particular imaging modalities. In the proposed model, preprocessing begins with zero centering, followed by a Gaussian filter combined with a bilateral filter. For feature extraction, ResNet is used for its residual connections; the first layers of the architecture are frozen and the last three layers are customized for feature extraction. The study emphasizes how integrating deep learning approaches with a variety of imaging modalities may enhance diagnostic accuracy. The accuracies obtained using the VGG-16, VGG-19, ResNet-101, ResNet-50, DenseNet-121, and Inception-V3 models are 89.61%, 92.81%, 96.32%, 95.27%, 97.80%, and 96.44%, respectively, while the proposed ResNet-53 achieves a classification accuracy of 99.65%. ResNet-53 also outperforms the baseline models with a precision of 98.96%, recall of 95%, and F1-score of 96.97%, demonstrating its ability to handle class imbalance more effectively than previous approaches.
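To make the transfer-learning setup concrete, the sketch below freezes the early layers of a pretrained ResNet and fine-tunes only the last stage and a new binary AD/healthy head, assuming PyTorch and a recent torchvision. The ResNet-50 backbone and the exact frozen/fine-tuned split are stand-in assumptions; the paper's ResNet-53 variant is not publicly available.

```python
# Hedged sketch: transfer learning with frozen early layers (ResNet-50 as a stand-in).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

# Freeze all pretrained parameters first.
for p in model.parameters():
    p.requires_grad = False

# Unfreeze the last residual stage and replace the head for 2-class AD/healthy output.
for p in model.layer4.parameters():
    p.requires_grad = True
model.fc = nn.Linear(model.fc.in_features, 2)   # new head, trained from scratch

# Optimize only the trainable (unfrozen) parameters.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)

# Smoke test: a batch of 224x224 MRI/PET slices replicated to 3 channels.
logits = model(torch.randn(4, 3, 224, 224))
print(logits.shape)  # torch.Size([4, 2])
```

The preprocessing described in the abstract (zero centering, then a Gaussian filter combined with a bilateral filter) would be applied to the scans before they reach the network.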