The need for effective and fair resource allocation in cloud computing has long been identified in both the literature and industrial contexts. Cloud computing, seen as a promising technology, offers usage-based payment and scalable, on-demand computing resources. However, over the past decade, the growing complexity of the IT world has made Quality of Service (QoS) in the cloud a challenging subject and an NP-hard problem. Specifically, the fair allocation of resources in the cloud becomes particularly interesting when many users submit several tasks, each requiring multiple resources. Research in this area has been growing since 2012 with the introduction of the Dominant Resource Fairness (DRF) algorithm as an initial attempt to solve the fair resource allocation problem in the cloud. Although DRF satisfies a set of desirable fairness properties, it has been proven inefficient under certain conditions. Notably, DRF and the works that extend it are not intuitively fair after all: these implementations fail to utilize all the resources in the system, leaving the system imbalanced with respect to individual resources. To address those issues, we propose in this paper a novel algorithm, the Fully Fair Multi-Resource Allocation Algorithm in Cloud Environments (FFMRA), which allocates resources in a fully fair way by considering both dominant and non-dominant shares. Results from experiments conducted in CloudSim show that FFMRA achieves approximately 100% resource utilization while distributing resources fairly among users and meeting desirable fairness properties.
Cloud computing is a novel paradigm that provides on-demand, scalable, pay-as-you-use computing resources in a virtualized form. With cloud computing, users can access large pools of resources anywhere, without limitation. To use the facilities provided by the cloud efficiently, resource management must be considered from several aspects. Among these, resource allocation has received much attention. Given that the cloud is heterogeneous, the allocation of resources has to become more sophisticated. As a first promising work on this problem, Dominant Resource Fairness (DRF) was proposed, which takes into account users' dominant shares. Although DRF has a set of desirable fairness properties, it has limitations that have already been identified in the literature. Unfortunately, DRF and its recent developments are not intuitively fair with respect to varied resource demands. In this paper, we propose a Multi-level Fair Dominant Resource Scheduling (MLF-DRS) algorithm, a new allocation model inspired by Max-Min fairness and proportionality. Unlike other works that equalize dominant shares across different resource types, which leads to starvation in the maximization of allocation for some users, our algorithm guarantees that each user receives the resources they desire based on dominant shares. As can be deduced from the mathematical proofs, MLF-DRS fully utilizes resources, meets several of the desirable fair allocation properties, and extends naturally to settings with multiple servers.
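For reference, the baseline DRF loop that the two abstracts above build on can be sketched as follows. This is a minimal illustration of dominant-share max-min allocation, not the FFMRA or MLF-DRS algorithms themselves; the function name and the `max_tasks` cap are ours.

```python
# Minimal sketch of the classic Dominant Resource Fairness (DRF) loop.
# Function name and the task cap are illustrative, not from the papers above.
def drf_allocate(capacity, demands, max_tasks=1000):
    """Repeatedly grant one task to the user with the smallest dominant share.

    capacity: total amount of each resource, e.g. [9 CPUs, 18 GB].
    demands:  per-task demand vector of each user.
    """
    n, m = len(demands), len(capacity)
    allocated = [[0.0] * m for _ in range(n)]   # per-user resource totals
    consumed = [0.0] * m                        # system-wide totals
    tasks = [0] * n
    for _ in range(max_tasks):
        # A user's dominant share is their largest fractional share of any resource.
        shares = [max(a / c for a, c in zip(alloc, capacity)) for alloc in allocated]
        # Serve the lowest dominant share whose next task still fits.
        for u in sorted(range(n), key=lambda u: shares[u]):
            if all(consumed[r] + demands[u][r] <= capacity[r] for r in range(m)):
                for r in range(m):
                    allocated[u][r] += demands[u][r]
                    consumed[r] += demands[u][r]
                tasks[u] += 1
                break
        else:
            return tasks, consumed  # no user's next task fits: capacity stays idle
    return tasks, consumed
```

On the well-known two-user DRF example (capacity of 9 CPUs and 18 GB, per-task demands of ⟨1 CPU, 4 GB⟩ and ⟨3 CPUs, 1 GB⟩), this loop grants 3 and 2 tasks respectively and leaves 4 GB of memory idle, which is exactly the kind of under-utilization the abstracts above criticize.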
Containerization has become a new approach that facilitates application deployment and delivers scalability, productivity, security, and portability. As a first promising platform, Docker was proposed in 2013 to automate the deployment of applications. Docker offers many advantages for delivering cloud-native services; however, its widespread use has revealed problems such as performance overhead. To deal with those problems, Kubernetes was introduced in 2015 as a container orchestration platform to simplify the management of containers. Kubernetes simplifies managing a large number of Docker containers; however, fairness, which has been applied in other platforms such as Apache Hadoop YARN and Mesos, is missing from Kubernetes. Assigning resource limits fairly among pods in Kubernetes becomes a challenging issue, as some applications may require intensive resources such as CPU and memory, which should be maximized to satisfy them. To that end, in this paper, we present a novel way to assign resource limits fairly among the pods in a Kubernetes environment.
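One illustrative way to split a single node resource fairly among pods is classic max-min water-filling. The sketch below is our own illustration of that general idea (pod names and millicore units are assumed), not the allocation method proposed in the paper above.

```python
def fair_limits(capacity, demands):
    """Max-min fair split of one resource (e.g. CPU millicores) among pods.

    Pods asking for less than an equal share keep what they ask for; the
    freed-up capacity is redistributed among the remaining, hungrier pods.
    Illustrative sketch only, not the paper's algorithm.
    """
    alloc = {}
    remaining = capacity
    pending = dict(demands)
    while pending:
        share = remaining / len(pending)
        satisfied = {pod: d for pod, d in pending.items() if d <= share}
        if not satisfied:
            # Everyone left wants more than an equal share: split evenly.
            for pod in pending:
                alloc[pod] = share
            return alloc
        for pod, d in satisfied.items():
            alloc[pod] = d          # grant the full (small) demand
            remaining -= d
            del pending[pod]
    return alloc
```

For example, with 1000 millicores of CPU and demands of 200, 400, and 1000 for three pods, the first two pods receive their full requests and the third is capped at the remaining 400 millicores.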
Concept drift, which refers to changes in the underlying process structure or customer behaviour over time, is inevitable in business processes, making it challenging to ensure that the learned model remains a proper representation of the new data. Due to factors such as seasonal effects and policy updates, concept drifts can occur in customer transitions and in time spent throughout the process, either suddenly or gradually. In a concept drift context, we can discard the old data and retrain the model using new observations (sudden drift), combine the old data with the new data to update the model (gradual drift), or keep the model unchanged (no drift). In this paper, we model the response to concept drift as a sequential decision-making problem by combining a hierarchical Markov model and a Markov decision process (MDP). The approach detects concept drift, retrains the model, and updates customer profiles automatically. We validate the proposed approach on 68 artificial datasets and a real-world hospital billing dataset, with experimental results showing promising performance.
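The three responses listed above can be condensed into a small decision rule. The sketch below triggers on the total-variation distance between old and new transition distributions; the thresholds are purely illustrative, and the paper's MDP-based approach learns such a policy rather than hard-coding it.

```python
def drift_response(old_dist, new_dist, sudden=0.3, gradual=0.1):
    """Pick a model-update action from the shift between two distributions.

    old_dist / new_dist map events (e.g. customer transitions) to their
    probabilities; `sudden` and `gradual` are illustrative thresholds,
    not values from the paper.
    """
    keys = set(old_dist) | set(new_dist)
    # Total-variation distance between the two distributions, in [0, 1].
    tv = 0.5 * sum(abs(old_dist.get(k, 0.0) - new_dist.get(k, 0.0)) for k in keys)
    if tv >= sudden:
        return "retrain"   # sudden drift: discard old data, refit on new
    if tv >= gradual:
        return "update"    # gradual drift: blend old and new data
    return "keep"          # no drift: leave the model unchanged
```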