A key feature of virtualization technology is live migration, which allows a Virtual Machine (VM) to be moved from one physical host to another without interrupting its execution. This feature enables more sophisticated policies in a cloud environment, such as the optimization of energy and computational resources and the improvement of quality of service. However, live migration can impose severe performance degradation on the VM's applications and cause multiple impacts on the service provider's infrastructure, such as network congestion and performance degradation of co-located VMs. Unlike several previous studies, we consider the VM workload an important factor, and we argue that carefully choosing the proper moment to migrate a VM can reduce the live migration penalties. This paper introduces a method that identifies the workload cycles of a VM and, based on that information, can postpone a live migration. In our experiments using relevant benchmarks, the proposed method reduced network data transfer by up to 43% and live migration time by up to 74% compared to traditional consolidation strategies that perform live migration without considering the VM workload.
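The postponement idea described above can be sketched as a simple gating check: trigger the migration only when the VM's recent activity sits in a low phase of its workload cycle. This is a minimal illustration, not the paper's actual detector; the activity metric (e.g. a normalized page-dirtying or CPU-utilization rate), the threshold, and the window size are all assumptions.

```python
# Hedged sketch: postpone live migration until the VM's recent average
# activity drops below a threshold, i.e. a trough in its workload cycle.
# Metric, threshold, and window size are illustrative assumptions.

def should_migrate_now(history, threshold=0.4, window=3):
    """Return True only when the last `window` samples average below threshold."""
    if len(history) < window:
        return False  # not enough observations to judge the cycle yet
    recent = history[-window:]
    return sum(recent) / window < threshold

# Normalized workload samples: a busy phase followed by a quiet phase.
activity = [0.9, 0.8, 0.85, 0.3, 0.25, 0.2]
print(should_migrate_now(activity))  # quiet phase -> migration allowed
```

A scheduler would call such a check periodically and fall back to a forced migration after a deadline, so a VM with no quiet phase is still eventually moved.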
Abstract. Elasticity is an important feature of cloud computing environments: it allows a Virtual Machine to adapt its resource allocation to the nature of its workload. Until now, most memory elasticity implementations have required human intervention. Implementing memory elasticity is not straightforward because of long-standing Operating System assumptions; in general, an Operating System assumes that the installed memory is static and will not increase or decrease until the next shutdown. This paper compares two techniques for implementing memory elasticity, one based on an Exponential Moving Average and the other based on Page Faults. To compare the two implementations, a method that measures allocation efficiency based on the space-time product was used. With the Exponential Moving Average, memory was used more efficiently; when Page Faults were used as the main criterion to allocate or remove memory, performance improved compared to the Exponential Moving Average technique.
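The Exponential Moving Average approach mentioned above can be sketched as smoothing the observed memory usage and sizing the allocation from the smoothed value. This is a hypothetical illustration of the general technique; the smoothing factor `alpha` and the `headroom` slack are assumptions, not the paper's tuned parameters.

```python
# Minimal sketch of EMA-driven memory elasticity, assuming illustrative
# parameters: smooth the observed usage, then allocate smoothed usage
# plus a fixed headroom so brief spikes do not cause thrashing.

def ema_update(prev_ema, sample, alpha=0.3):
    """Exponential moving average of observed memory usage (MB)."""
    return alpha * sample + (1 - alpha) * prev_ema

def next_allocation(usage_samples, headroom=1.2, alpha=0.3):
    """Suggest a memory allocation (MB) tracking the smoothed demand."""
    ema = usage_samples[0]
    for sample in usage_samples[1:]:
        ema = ema_update(ema, sample, alpha)
    return ema * headroom

# Usage history in MB; the late spike is damped by the EMA.
print(round(next_allocation([512, 600, 580, 900, 870])))
```

The space-time product the paper uses as an efficiency measure would then integrate (allocation − usage) over time, so a tighter EMA-tracked allocation scores better than a static oversized one.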
Workload characterization is an important feature in a cloud environment. With fast and accurate characterization, cloud providers can allocate virtual machines to the physical hosts that best fit a specific workload and improve overall performance without new investments. Current workload characterization strategies are based on complex algorithms that are difficult to apply in a cloud environment with thousands of running virtual machines. Other strategies for characterizing virtual machines rely on extensive changes to the hypervisor or virtual machine layer and are hypervisor-dependent. This paper presents a hypervisor-agnostic characterization methodology that uses standard processor and memory utilization metrics available via SNMP. Collected data are normalized and fed to a low-computational-cost decision tree that can characterize a virtual machine within a customizable time window. For evaluation, tests were performed on different hypervisors (KVM, Xen, and VMware) running SPEC benchmarks and on real workloads, such as a Hadoop cluster on Rackspace and a production web server running in a VMware farm. The results showed that our methodology infers a highly accurate characterization.
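The pipeline described above (normalize SNMP metrics, then classify with a small decision tree) can be illustrated with a hand-written tree. The thresholds and class labels below are assumptions for the sketch, not the trained model from the paper.

```python
# Illustrative sketch: characterize a VM from normalized CPU and memory
# utilization with a fixed, hand-written decision tree. Thresholds and
# class names are assumptions, not the paper's actual trained tree.

def normalize(value, max_value):
    """Scale a raw SNMP counter into [0, 1]."""
    return min(value / max_value, 1.0)

def classify(cpu_util, mem_util):
    """Tiny decision tree over normalized utilization in [0, 1]."""
    if cpu_util > 0.7:
        return "CPU-bound" if mem_util <= 0.5 else "CPU+memory-bound"
    if mem_util > 0.7:
        return "memory-bound"
    return "idle/IO-bound"

# One observation window with hypothetical raw values (percent).
print(classify(normalize(85, 100), normalize(30, 100)))  # CPU-bound
```

Because the inputs are plain SNMP utilization counters, the same classifier runs unchanged against KVM, Xen, or VMware guests, which is the hypervisor-agnostic property the abstract claims.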
Among the main motivations for adopting Cloud Computing are the optimization of computational resources and cost control. Better use of computational resources must be achieved from both the user's and the provider's perspectives. However, unlike what happens in traditional data centers, Cloud resources are shared among different users and, in general, the service provider has little or no information about the type of workload submitted to the virtual machines. This scenario can lead to poor load distribution, resulting in SLA and QoS violations. Using an analytical methodology, this paper evaluates two workload characterization strategies, both based on Machine Learning techniques (Naive Bayes and Decision Trees). In addition, this work discusses and presents load indices that can be collected by SNMP agents while imposing little overhead on the system (around 2%). The results show that Decision Trees are faster but more sensitive to variations in the metrics, whereas Naive Bayes is more accurate in some situations but requires the data to be discretized before it can be used.
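The discretization requirement mentioned for Naive Bayes can be illustrated as binning each continuous load index into labeled intervals before training. The bin edges and labels below are assumptions for the sketch.

```python
# Illustrative discretization step for Naive Bayes: map continuous,
# normalized load indices into labeled bins. Bin edges are assumptions.

def discretize(value, edges=(0.25, 0.5, 0.75)):
    """Map a normalized load index in [0, 1] to a categorical bin."""
    labels = ("low", "medium", "high", "very-high")
    for i, edge in enumerate(edges):
        if value < edge:
            return labels[i]
    return labels[-1]

print([discretize(v) for v in (0.1, 0.4, 0.6, 0.9)])
```

Decision Trees, by contrast, split on continuous values directly, which is one reason the abstract finds them easier to apply but more sensitive to metric variation.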