One of the main challenges faced by users of infrastructure-as-a-service (IaaS) clouds is the difficulty of adequately estimating the virtual resources necessary for their applications. Although many cloud providers offer programmatic ways to rapidly acquire and release resources, it is important that users have a prior understanding of the impact that each virtual resource type offered by the provider may have on application performance. This paper presents Cloud Crawler, a new declarative environment aimed at supporting users in describing and automatically executing application performance tests in IaaS clouds. To this end, the environment provides a novel declarative domain-specific language, called Crawl, which supports the description of a variety of performance evaluation scenarios in multiple IaaS clouds; and an extensible Java-based cloud execution engine, called Crawler, which automatically configures, executes and collects the results of each performance evaluation scenario described in Crawl. To illustrate Cloud Crawler's potential benefits, the paper reports on an experimental evaluation of a social network application in two public IaaS cloud providers, in which the proposed environment has been successfully used to investigate the application performance for different virtual machine configurations and under different demand levels.
Cloud computing technologies are changing the way in which services are deployed and operated, introducing advantages such as a great degree of flexibility (e.g. pay-per-use models and automatic scalability). However, existing offerings (Amazon EC2, GoGrid, etc.) are based on proprietary service definition mechanisms, thus introducing vendor lock-in for the customers who deploy their services on those clouds. On the other hand, there are open standards that address the problem of packaging and distributing virtual appliances (i.e. complete software stacks deployed in one or more virtual machines), but they have not been designed specifically for clouds. This paper proposes a service specification language for cloud computing platforms, based on the DMTF's Open Virtualization Format standard, extending it to address the specific requirements of these environments. In order to assess the feasibility of our proposal, we have implemented a prototype system able to deploy and scale service specifications using the proposed extensions. Additionally, practical results are presented for an industrial case study in which the software prototype is used to automatically deploy and flexibly scale the Sun Grid Engine application.
SUMMARY: As the number of infrastructure-as-a-service (IaaS) cloud offerings in the market increases, selecting an appropriate configuration of cloud resources for a given application becomes a non-trivial task even for experienced developers. Because cloud resources are relatively cheap, usually charged by the hour, developers could systematically evaluate the performance of their application using different resource types from different cloud providers, thus allowing them to accurately identify the best providers and resource types for their application. However, conducting systematic performance tests in multiple IaaS clouds may require a significant amount of planning and configuration effort from application developers. This paper presents Cloud Crawler, a declarative environment for specifying and conducting application performance tests in IaaS clouds. The environment includes a novel declarative domain-specific language, Crawl, by means of which cloud users can describe, at a high abstraction level, a large variety of performance evaluation scenarios for a given application, and a scenario execution engine, Crawler, which automatically configures, executes, and collects the results of the scenarios described in Crawl. The paper also reports on how Cloud Crawler has been successfully used to systematically test the performance of two open-source web applications in public IaaS clouds.
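To make the idea of a declarative performance-evaluation scenario more concrete, the following is a minimal Java sketch of how such a scenario and its execution loop might be modeled. It is not the actual Crawl syntax or Crawler API; all class, field, and provider names are illustrative assumptions.

```java
import java.util.List;

public class ScenarioSketch {

    // Hypothetical scenario description: one VM type evaluated under several
    // demand levels, with a single metric of interest. Not the Crawl language.
    record Scenario(String provider, String vmType, List<Integer> demandLevels, String metric) {}

    public static void main(String[] args) {
        List<Scenario> scenarios = List.of(
            new Scenario("provider-a", "small",  List.of(50, 100, 200), "response-time-ms"),
            new Scenario("provider-a", "medium", List.of(50, 100, 200), "response-time-ms"),
            new Scenario("provider-b", "small",  List.of(50, 100, 200), "response-time-ms")
        );

        for (Scenario s : scenarios) {
            for (int users : s.demandLevels()) {
                // An execution engine like Crawler would automate these steps:
                // provision the VM, deploy the application, drive the workload,
                // and collect the chosen metric. Here we only print the plan.
                System.out.printf("provision %s/%s -> run workload of %d users -> collect %s%n",
                        s.provider(), s.vmType(), users, s.metric());
            }
        }
    }
}
```

The sketch only illustrates the separation the papers describe: the scenario data is declarative and provider-agnostic, while the execution loop encapsulates the provisioning and measurement steps that would otherwise be scripted by hand for each cloud.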
Most current aspect composition mechanisms rely on syntactic references to the base modules or on wildcard mechanisms quantifying over such syntactic references in pointcut expressions. This leads to the well-known problem of pointcut fragility. Semantics-based composition mechanisms aim to alleviate such fragility by focusing on the meaning and intention of the composition, hence avoiding strong syntactic dependencies on the base modules. However, to date, there are no empirical studies validating whether semantics-based composition mechanisms are indeed more expressive and less fragile compared to their syntax-based counterparts. In this paper we present a first study comparing semantics- and syntax-based composition mechanisms in aspect-oriented requirements engineering. In our empirical study the semantics-based compositions examined were indeed found to be more expressive and less fragile. The semantics-based compositions in the study also required one to reason about composition interdependencies early on, hence potentially reducing the overhead of revisions arising from later trade-off analysis and stakeholder negotiations. However, this added to the overhead of specifying the compositions themselves. Furthermore, since the semantics-based compositions considered in the study were based on natural language analysis, they required an initial effort investment in lexicon building and depended strongly on advanced tool support to expose the natural language semantics.