Cloud computing utilizes heterogeneous resources located in various datacenters to provide efficient performance on a pay-per-use basis. However, existing mechanisms, frameworks, and techniques for resource management are inadequate to manage these applications, environments, and the behavior of resources. A Quality of Service (QoS) based autonomic resource management technique is required to execute workloads and deliver cost-efficient, reliable cloud services automatically. In this paper, we present an intelligent and autonomic resource management technique named RADAR. RADAR focuses on two properties of self-management: first, self-healing, which handles unexpected failures, and second, self-configuration of resources and applications. The performance of RADAR is evaluated in a cloud simulation environment, and the experimental results show that RADAR delivers better outcomes in terms of execution cost, resource contention, execution time, and SLA violations while delivering reliable services.

KEYWORDS
cloud computing, quality of service, resource provisioning, resource scheduling, self-configuring, self-healing, self-management, service level agreement

INTRODUCTION
Cloud computing offers various services such as Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). However, providing dedicated cloud services that ensure the various Quality of Service (QoS) requirements of a cloud user and avoid Service Level Agreement (SLA) violations is a difficult task. Based on the availability of cloud resources, dynamic services are provided without ensuring the required QoS. 1 To fulfill the QoS requirements of user applications, the cloud provider should change its ecosystem. 2 Self-management of cloud services is needed to provide the required services and fulfill the QoS requirements of the user automatically. Autonomic management of resources manages the cloud service automatically as per the requirements of the environment, thereby maximizing resource utilization and cost-effectiveness while ensuring maximum reliability and availability of the service. 3 Based on human guidance, a self-managed system keeps itself stable in uncertain situations and adapts rapidly to new environmental situations such as network, hardware, or software failures. 4 QoS-based autonomic systems are inspired by biological systems, which can manage challenges such as dynamism, uncertainty, and heterogeneity. IBM's autonomic model 3 based cloud computing system uses the MAPE-K loop (Monitor, Analyze, Plan, and Execute over a shared Knowledge base), and its objective is to execute workloads within their budget and deadline by satisfying the QoS requirements of the cloud consumer. An autonomic system considers the following properties while managing cloud resources 1-3:
• Self-healing recognizes, analyzes, and recovers from unexpected failures automatically.
• Self-configuring adapts to changes in the environment automatically.
In this paper, we have developed a technique for self-configuRin...
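The introduction above refers to IBM's MAPE-K loop (Monitor, Analyze, Plan, Execute, with a shared Knowledge base). The sketch below is a minimal, generic illustration of such a control loop in Python; the class names, metric fields, and the CPU threshold are assumptions made for demonstration and are not taken from RADAR's actual implementation.

```python
# Minimal illustrative sketch of a MAPE-K (Monitor, Analyze, Plan, Execute,
# shared Knowledge) control loop. All names and thresholds are hypothetical;
# RADAR's real components are not reproduced here.
from dataclasses import dataclass, field

@dataclass
class Knowledge:
    """Shared knowledge base used by all four phases."""
    cpu_threshold: float = 0.85          # assumed SLA-related utilization limit
    history: list = field(default_factory=list)

def monitor(resources):
    """Collect current utilization metrics from the managed resources."""
    return {r["id"]: r["cpu_util"] for r in resources}

def analyze(metrics, knowledge):
    """Flag resources whose utilization violates the assumed threshold."""
    knowledge.history.append(metrics)
    return [rid for rid, util in metrics.items() if util > knowledge.cpu_threshold]

def plan(overloaded):
    """Build a simple reconfiguration plan (e.g., add one VM per hotspot)."""
    return [{"action": "provision_vm", "near": rid} for rid in overloaded]

def execute(plan_steps):
    """Apply the plan; here we only print the intended actions."""
    for step in plan_steps:
        print(f"executing {step['action']} near resource {step['near']}")

def mape_k_iteration(resources, knowledge):
    metrics = monitor(resources)
    overloaded = analyze(metrics, knowledge)
    execute(plan(overloaded))

if __name__ == "__main__":
    kb = Knowledge()
    sample = [{"id": "vm-1", "cpu_util": 0.92}, {"id": "vm-2", "cpu_util": 0.40}]
    mape_k_iteration(sample, kb)
```

In a real autonomic manager, monitor() would pull telemetry from the infrastructure and execute() would call the provider's provisioning API rather than print the intended actions.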
Summary: Resource management is one of the major issues in cloud computing for IaaS. Among the several resource management problems, allocation, provisioning, and requirement mapping directly affect the performance of the cloud. Resource allocation signifies the assignment of available resources to different workloads in an economically optimal manner. Precise and accurate allocation is required to maximize the usage of resources. Current methods of task allocation do not take previously acquired knowledge, the type of the tasks, and the QoS parameters into account together in the allocation phase, and they have not been trained for different sets of tasks. Furthermore, the self-optimization of the autonomous system fails to address the task type and to identify the relationship between tasks and their resource demands and requirements. Important aspects such as task management and resource utilization are the primary factors to consider for such a characteristic. This paper presents a novel autonomic resource management framework named task-aware autonomic resource allocation strategy using neural networks (TARNN), which aims to use knowledge about the behavior of tasks over an extended period of time to allocate resources when a similar task is submitted in the future by the user. To perform the allocation effectively, a neural network–based approach is adopted to classify tasks based on the task parameters, task type, and QoS parameters and to allocate resources optimally for a new task autonomously, without the intervention of the cloud provider. Moreover, to identify and improve the relationship of the tasks with the resources in the context of scheduling, we propose a novel modified Particle Swarm Optimization (m-PSO) algorithm to schedule tasks to resources based on resource demands. In TARNN, we split the collected synthetic dataset in a 60:40 ratio for training and testing. We found that the neural network–based approach provides almost 80% classification accuracy with respect to task type and QoS parameters. We also compared our results with a support vector machine (SVM), which achieved 69% accuracy. Since tasks are classified appropriately, the occurrence of resource reconfiguration and VM migration is drastically reduced. Hence, our system provides better allocation of resources and schedules tasks appropriately, thereby improving the performance of the cloud.
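The TARNN summary above relies on a modified PSO (m-PSO) to map tasks to resources, but the modification itself is not described here. The sketch below therefore shows a standard, generic PSO for task-to-resource scheduling that minimizes an estimated makespan; the task lengths, resource speeds, and PSO constants are assumed values for illustration only and do not reproduce the authors' m-PSO.

```python
# Illustrative sketch of particle swarm optimization (PSO) applied to
# task-to-resource scheduling. This is a generic PSO, not the authors'
# modified m-PSO; all constants below are assumed for demonstration.
import random

TASK_LEN = [400, 250, 900, 300, 620]        # hypothetical task lengths (MI)
RES_MIPS = [500, 1000, 750]                 # hypothetical resource speeds (MIPS)
N_PARTICLES, N_ITER = 20, 100
W, C1, C2 = 0.7, 1.5, 1.5                   # inertia and acceleration constants

def decode(position):
    """Map each continuous dimension to a resource index."""
    return [int(round(x)) % len(RES_MIPS) for x in position]

def makespan(position):
    """Fitness: finishing time of the busiest resource under this mapping."""
    load = [0.0] * len(RES_MIPS)
    for task, res in zip(TASK_LEN, decode(position)):
        load[res] += task / RES_MIPS[res]
    return max(load)

def pso():
    dim = len(TASK_LEN)
    pos = [[random.uniform(0, len(RES_MIPS) - 1) for _ in range(dim)]
           for _ in range(N_PARTICLES)]
    vel = [[0.0] * dim for _ in range(N_PARTICLES)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=makespan)[:]
    for _ in range(N_ITER):
        for i in range(N_PARTICLES):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (W * vel[i][d]
                             + C1 * r1 * (pbest[i][d] - pos[i][d])
                             + C2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if makespan(pos[i]) < makespan(pbest[i]):
                pbest[i] = pos[i][:]
                if makespan(pbest[i]) < makespan(gbest):
                    gbest = pbest[i][:]
    return decode(gbest), makespan(gbest)

if __name__ == "__main__":
    mapping, span = pso()
    print("task -> resource:", mapping, "estimated makespan:", round(span, 2))
```

The fitness function here is a simple makespan estimate; a QoS-aware variant would instead score each mapping against parameters such as deadline, budget, or energy, as suggested by the summary.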
Summary: The advent of big data technologies has changed the way many companies manage their data. Several companies have moved their data to the cloud using the concept of database-as-a-service (DBaaS). Moving databases to the cloud presents several challenges related to flexible and scalable data management. Although some of these companies have migrated to NoSQL databases, most still rely on relational databases in the cloud to manage data, especially data that is critical to the decision-making process. Online analytical processing (OLAP) queries take a long time to be processed, thus demanding high-performance capabilities from their associated database systems to obtain results in a feasible time. In this article, we propose a middleware solution that can be deployed in any cloud provider, named C-ParGRES, which exploits database replication and interquery and intraquery parallelism to efficiently support OLAP queries in the cloud. C-ParGRES is an extension of ParGRES, an open-source database cluster middleware for high-performance OLAP query processing in clusters. C-ParGRES exploits cloud capabilities such as on-demand resource provisioning and elasticity. In addition, C-ParGRES can create multiple, independent virtual clusters for different databases and users. We evaluate C-ParGRES with two real-world OLAP applications, both from the Brazilian Institute of Geography and Statistics. The results show that C-ParGRES is a cost-effective solution for OLAP query processing in the cloud.
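C-ParGRES combines database replication with interquery and intraquery parallelism. The sketch below illustrates only the interquery side of that idea in a generic way: independent OLAP queries are distributed round-robin across replicas and executed concurrently. The replica endpoints and the run_on_replica() stub are hypothetical and do not reflect C-ParGRES's actual middleware interfaces.

```python
# Illustrative sketch of interquery parallelism over database replicas,
# the general idea that C-ParGRES builds on. Endpoints and the executor
# stub are hypothetical; a real deployment would issue the SQL through a
# database driver against each replica.
import itertools
import time
from concurrent.futures import ThreadPoolExecutor

REPLICAS = ["replica-1:5432", "replica-2:5432", "replica-3:5432"]  # assumed endpoints

def run_on_replica(replica, query):
    """Stand-in for executing an OLAP query on one replica."""
    time.sleep(0.1)                      # simulate query latency
    return f"{replica} answered: {query[:30]}..."

def dispatch(queries):
    """Round-robin independent queries across replicas and run them concurrently."""
    assignment = zip(itertools.cycle(REPLICAS), queries)
    with ThreadPoolExecutor(max_workers=len(REPLICAS)) as pool:
        futures = [pool.submit(run_on_replica, rep, q) for rep, q in assignment]
        return [f.result() for f in futures]

if __name__ == "__main__":
    olap_queries = [
        "SELECT region, SUM(sales) FROM facts GROUP BY region",
        "SELECT year, AVG(income) FROM census GROUP BY year",
        "SELECT sector, COUNT(*) FROM companies GROUP BY sector",
    ]
    for result in dispatch(olap_queries):
        print(result)
```

Intraquery parallelism, by contrast, would split a single heavy query into partial queries over data fragments and merge the partial results, which this sketch does not attempt to show.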