Context: The impact of the coronavirus disease 2019 (COVID-19) pandemic on vaccine-preventable diseases, including diphtheria, may undermine previous gains made towards eradication of the disease. Aims: We report the epidemiological profile, clinical features, laboratory findings, and hospitalization outcomes amongst cases of diphtheria managed at Federal Medical Centre, Katsina, Nigeria during the first wave of the COVID-19 pandemic. Settings and Design: This was a retrospective review of cases of diphtheria managed between July and December 2020. Methods and Material: We extracted the clinical data (socio-demographics, clinical features, and hospitalization outcomes) and laboratory findings (full blood counts, electrolytes, urea, and creatinine) from the children's records. Statistical Analysis Used: Using SPSS, we carried out a descriptive analysis and applied binary logistic regression to determine factors associated with death. The level of statistical significance was set at P < 0.05. Results: A total of 35 cases of diphtheria were admitted and managed from 1 July to 31 December 2020. The mean age of the children was 7.6 ± 3.1 years. Fifteen (42.9%) were male. There were 24 deaths (case fatality rate of 68.6%). Clinical findings were comparable between survivors and non-survivors except for bull neck, which was more common among non-survivors (P = 0.022). The median duration of hospitalization was shorter in those who died (P = 0.001). Age, sex, immunization status, leukocytosis, and biochemical features of renal impairment were not predictive of death. Presence of bull neck was predictive of death (adjusted odds ratio 2.115, 95% CI 1.270–3.521). Conclusions: The study shows a high number of cases of diphtheria over a short period of six months, with high mortality. Amongst the clinical and laboratory variables, only the presence of bull neck was predictive of death.
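The adjusted odds ratio reported above comes from binary logistic regression. As a minimal stdlib-only sketch (the cohort here is synthetic and the variable names are hypothetical, not the study's data), one can fit a single-predictor logistic model by gradient descent and exponentiate the coefficient to obtain an odds ratio:

```python
import math
import random

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit y ~ sigmoid(b0 + b1*x) by batch gradient descent."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += p - y
            g1 += (p - y) * x
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

# Synthetic cohort: bull_neck (0/1) vs death (0/1); probabilities are
# illustrative only, not taken from the study.
random.seed(1)
bull_neck = [int(random.random() < 0.5) for _ in range(200)]
death = [int(random.random() < (0.8 if b else 0.5)) for b in bull_neck]

b0, b1 = fit_logistic(bull_neck, death)
odds_ratio = math.exp(b1)  # OR > 1: bull neck associated with death
```

In a real analysis SPSS (or statsmodels) would also report the confidence interval via the standard error of b1; the sketch only recovers the point estimate.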
Failure is an increasingly important issue in high performance computing and cloud systems. As these systems continue to grow in scale and complexity, mitigating the impact of failure and providing accurate predictions with sufficient lead time remains a challenging research problem. Traditional fault-tolerance strategies such as regular checkpointing and replication are not adequate because of the emerging complexities of high performance computing systems. This necessitates an effective and proactive failure management approach aimed at minimizing the effect of failure within the system. With the advent of machine learning techniques, the ability to learn from past information to predict future patterns of behaviour makes it possible to predict potential system failures more accurately. Thus, in this paper, we explore the predictive abilities of machine learning by applying a number of algorithms to improve the accuracy of failure prediction. We have developed a failure prediction model using time series and machine learning, and performed comparison-based tests on the prediction accuracy. The primary algorithms we considered are Support Vector Machine (SVM), Random Forest (RF), k-Nearest Neighbors (KNN), Classification and Regression Trees (CART), and Linear Discriminant Analysis (LDA). Experimental results indicate that the average prediction accuracy of our model using SVM when predicting fail
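The classifier comparison described above can be illustrated with the simplest of the listed algorithms, k-Nearest Neighbors. This is a stdlib-only sketch, not the paper's model: the node metrics, feature names, and class distributions are all invented for illustration.

```python
import math
import random

def knn_predict(train, labels, x, k=3):
    """Predict the majority label among the k nearest training points."""
    nearest = sorted(range(len(train)),
                     key=lambda i: math.dist(train[i], x))[:k]
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Synthetic node metrics: (cpu_temp_celsius, error_rate) -> 1 = failed, 0 = healthy.
random.seed(0)
healthy = [(random.gauss(60, 5), random.gauss(0.1, 0.05)) for _ in range(50)]
failing = [(random.gauss(85, 5), random.gauss(0.8, 0.10)) for _ in range(50)]
X = healthy + failing
y = [0] * 50 + [1] * 50

# A hot node with a high error rate should be classified as likely to fail.
pred = knn_predict(X, y, (88.0, 0.75))
```

Swapping in SVM, RF, CART, or LDA from a library such as scikit-learn and comparing held-out accuracy is the essence of the comparison the abstract reports.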
Failure in a cloud system is defined as an event that occurs when the delivered service deviates from the correct, intended behavior. As cloud computing systems continue to grow in scale and complexity, there is an urgent need for cloud service providers (CSPs) to guarantee reliable on-demand resources to their customers in the presence of faults, thereby fulfilling their service level agreements (SLAs). Component failures in cloud systems are a familiar phenomenon, so large cloud service providers' data centers must be designed to provide a certain level of availability to the business system. The Infrastructure-as-a-Service (IaaS) cloud delivery model provides computational resources (CPU and memory), storage resources, and networking capacity that ensure high availability in the presence of such failures. In-production fault data recorded over a two-year period at the National Energy Research Scientific Computing Center (NERSC) has been studied and analyzed. Using the real-time data collected from the Computer Failure Data Repository (CFDR), this paper presents the performance of two machine learning (ML) algorithms, a Linear Regression (LR) model and a Support Vector Machine (SVM) with a linear Gaussian kernel, for predicting hardware failures in a real-time cloud environment to improve system availability. The performance of the two algorithms has been rigorously evaluated using the k-fold cross-validation technique. Furthermore, steps and procedures for future studies have been presented. This research will aid computer hardware companies and cloud service providers (CSPs) in designing reliable fault-tolerant systems by enabling better device selection, thereby improving system availability and minimizing unscheduled system downtime.
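The k-fold evaluation procedure mentioned above can be sketched in stdlib Python. The data is synthetic and a trivial threshold classifier stands in for the paper's LR and SVM models; only the cross-validation mechanics are the point here.

```python
import random

def k_fold_indices(n, k):
    """Yield (train_idx, test_idx) pairs over k roughly equal folds."""
    idx = list(range(n))
    random.shuffle(idx)
    fold = n // k
    for i in range(k):
        test = idx[i * fold:(i + 1) * fold]
        train = idx[:i * fold] + idx[(i + 1) * fold:]
        yield train, test

def threshold_classifier(xs, ys):
    """Learn the midpoint between the two class means as a decision threshold."""
    m0 = sum(x for x, y in zip(xs, ys) if y == 0) / ys.count(0)
    m1 = sum(x for x, y in zip(xs, ys) if y == 1) / ys.count(1)
    t = (m0 + m1) / 2
    return lambda x: int(x > t)

# Synthetic hardware-health feature: failures cluster at high values.
random.seed(42)
X = ([random.gauss(1.0, 0.3) for _ in range(60)] +
     [random.gauss(3.0, 0.3) for _ in range(60)])
y = [0] * 60 + [1] * 60

accs = []
for tr, te in k_fold_indices(len(X), 5):
    clf = threshold_classifier([X[i] for i in tr], [y[i] for i in tr])
    accs.append(sum(clf(X[i]) == y[i] for i in te) / len(te))
mean_acc = sum(accs) / len(accs)  # average held-out accuracy across 5 folds
```

Averaging accuracy over the k held-out folds, as the last lines do, is what makes the evaluation "rigorous" relative to a single train/test split: every sample is tested exactly once.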
Summary Cloud fault tolerance is an important issue in cloud computing platforms and applications. In the event of an unexpected system failure or malfunction, a robust fault-tolerant design may allow the cloud to continue functioning correctly, possibly at a reduced level, instead of failing completely. To ensure high availability of critical cloud services, application execution, and hardware performance, various fault-tolerant techniques exist for building self-autonomous cloud systems. In comparison with current approaches, this paper proposes a more robust and reliable architecture using an optimal checkpointing strategy to ensure high system availability and reduced task service finish time. Using pass rates and virtualized mechanisms, the proposed smart failover strategy (SFS) scheme uses components such as a cloud fault manager, cloud controller, cloud load balancer, and a selection mechanism, providing fault tolerance via redundancy, optimized selection, and checkpointing. In our approach, the cloud fault manager repairs faults generated before the task deadline is reached, blocking unrecoverable faulty nodes as well as their virtual nodes. This scheme is also able to remove temporary software faults from recoverable faulty nodes, thereby making them available for future requests. We argue that the proposed SFS algorithm makes the system highly fault tolerant by considering forward and backward recovery using diverse software tools. Compared with existing approaches, preliminary experiments with the SFS algorithm indicate an increase in pass rates and a consequent decrease in failure rates, showing overall good performance in task allocation. We present these results using experimental validation tools, with comparison against other techniques, laying a foundation for a fully fault-tolerant Infrastructure-as-a-Service cloud environment. Copyright © 2017 John Wiley & Sons, Ltd.
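The checkpoint-and-rollback idea behind backward recovery can be illustrated with a stdlib-only sketch. The task, failure injection, and checkpoint interval here are all hypothetical; this is not the SFS algorithm itself, just the core mechanism it builds on: a task persists its state every few steps, and after a simulated failure it resumes from the last checkpoint rather than from scratch.

```python
import random

def run_with_checkpoints(total_steps, interval, fail_prob, rng):
    """Run a task step by step, checkpointing every `interval` steps.

    On a simulated failure the task rolls back to the last checkpoint
    (backward recovery) instead of restarting from step 0.
    Returns (steps_executed_including_redone_work, checkpoint_saves).
    """
    checkpoint = 0   # last safely persisted step
    step = 0         # current progress
    executed = 0     # total work done, including work redone after rollbacks
    saves = 0
    while step < total_steps:
        if rng.random() < fail_prob:
            step = checkpoint        # roll back lost progress
            continue
        step += 1
        executed += 1
        if step % interval == 0:
            checkpoint = step        # persist state
            saves += 1
    return executed, saves

rng = random.Random(7)
executed, saves = run_with_checkpoints(total_steps=100, interval=10,
                                       fail_prob=0.05, rng=rng)
# executed >= 100 because rolled-back work is redone; saves == 10 checkpoints.
```

The trade-off the abstract's "optimal checkpointing strategy" targets is visible here: a smaller `interval` means less redone work per failure but more checkpointing overhead, and an optimal strategy balances the two against the observed failure rate.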