Elasticity is the capability of an application to adapt its resource usage to the current workload of the system. Existing platform-as-a-service (PaaS) solutions concentrate on web applications and do not support elasticity for other kinds of applications. To change this, this paper proposes the vision and a partial implementation of a component-based PaaS cloud, consisting of a programming model and an autonomic manager for safeguarding non-functional application requirements. To illustrate this vision, the paper focuses on two key features of a PaaS infrastructure: the scale-out of services and the on-demand deployment of application resources. The former increases performance by distributing service calls among several service providers working in parallel; the latter refers to the automatic packaging and transfer of the needed deployment units. It is shown how these aspects have been solved within the Jadex PaaS cloud infrastructure. Both concepts have been validated using an application scenario, which highlights the advantages of automatic service scaling and deployment for developers and users.

An example is a complex calculation algorithm. If it is deployed as is in the cloud, there is no elasticity regarding the size of the calculation problem because, according to Amdahl's law [5], the computation speed cannot be increased by scaling out. The developer would have to split the computation manually into parallelizable chunks, for example by applying suitable algorithmic skeletons [6] such as map-reduce.

The work presented in this paper aims at supporting the development of cloud-aware component-based applications. Programming and adaptation management concepts and a PaaS infrastructure are introduced that empower developers to achieve application elasticity. This paper expands on a vision of elastic component-based applications presented earlier [7]. The vision focuses on programming and adaptation management concepts that allow explicit non-functional requirements (NFRs) and non-functional properties (NFPs) to be considered seamlessly during the design, implementation, and operation of applications. In addition, this paper presents the features required of a PaaS infrastructure suitable for realizing this vision. An initial prototype of such a PaaS infrastructure is presented, with a focus on service scale-out and on-demand deployment, together with an application scenario running on the prototype infrastructure.

This paper is structured as follows. Section 2 presents the vision of elastic component-based applications and the chosen approach. As two fundamental building blocks of the approach, Section 3 introduces a generic component model for elastic application programming, and Section 4 presents adaptation management for elastic applications. Section 5 details the elastic PaaS infrastructure by discussing how service scale-out and on-demand deployment can be realized. Afterwards, Section 6 discusses related work, and Section 7 concludes the paper.
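For reference, the scaling limit invoked in this abstract can be made explicit. Amdahl's law bounds the speedup obtainable by scaling out when part of the computation stays serial; in the standard formulation (symbols chosen here for illustration, with p the parallelizable fraction and n the number of service providers):

% Amdahl's law: the serial fraction (1 - p) caps the speedup S(n)
S(n) = \frac{1}{(1 - p) + \frac{p}{n}}, \qquad \lim_{n \to \infty} S(n) = \frac{1}{1 - p}

If the calculation is deployed as a single monolithic service, p is effectively 0 and S(n) = 1 for any n, which is exactly why the developer must first expose parallelizable chunks, for example via a map-reduce skeleton, before scale-out can help.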
Abstract. It is a time-honored fashion to implement a domain-specific language (DSL) by translation to a general-purpose language. Such an implementation is more portable, but an unidiomatic translation jeopardizes performance because, in practice, language implementations favor the common cases. This tension arises especially when the domain calls for complex control structures. We illustrate this tension by revisiting Landin's original correspondence between Algol and Church's lambda notation. We translate domain-specific programs with lexically scoped jumps to JavaScript. Our translation produces the same block structure and binding structure as in the source program, à la Abdali. The target code uses a control operator in direct style, à la Landin. In fact, the control operator used is almost Landin's J, hence our title. Our translation thus complements a continuation-passing translation à la Steele. These two extreme translations require JavaScript implementations to cater either for first-class continuations, as Rhino does, or for proper tail recursion. Less extreme translations should emit more idiomatic control-flow instructions such as for, break, and throw. The present experiment leads us to conclude that translations should preserve not just the data structures and the block structure of a source program, but also its control structure. We thus identify a new class of use cases for control structures in JavaScript, namely the idiomatic translation of control structures from DSLs.
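As a rough sketch of the design space this abstract describes (illustrative code, not taken from the paper; the DSL fragment and function names are invented), a lexically scoped jump can be rendered in JavaScript/TypeScript either with idiomatic direct-style control flow, such as a labeled break, or in continuation-passing style à la Steele:

// Hypothetical DSL program: scan a list and "jump" out with the first match.

// Direct-style, idiomatic translation: a labeled block plus break realizes
// the lexically scoped jump.
function findFirstNegative(xs: number[]): number | undefined {
  let result: number | undefined = undefined;
  search: {
    for (const x of xs) {
      if (x < 0) {
        result = x;
        break search; // jump to the end of the labeled block
      }
    }
  }
  return result;
}

// Continuation-passing translation of the same jump: the exit continuation k
// is invoked instead of returning, so deep inputs rely on proper tail calls.
function findFirstNegativeCPS(
  xs: number[],
  k: (r: number | undefined) => void
): void {
  const go = (i: number): void =>
    i >= xs.length ? k(undefined) : xs[i] < 0 ? k(xs[i]) : go(i + 1);
  go(0);
}

console.log(findFirstNegative([3, 1, -4, 2]));          // -4
findFirstNegativeCPS([3, 1, -4, 2], (r) => console.log(r)); // -4

The first version maps the DSL jump onto a control structure the engine already optimizes; the second depends on the engine supporting proper tail calls (or first-class continuations for more general jumps), which is the tension the abstract points out.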
For the last decade, mobile devices have grown in popularity and become the best-selling computing devices. Despite their rich user interaction and network connectivity, the computing power of mobile devices is low, and the lifetime of the applications running on them is limited by the battery. Mobile Cloud Computing (MCC) is a technology that tackles the limitations of mobile devices by bringing together their mobility with the vast computing power of the Cloud. Programming applications for MCC environments is not as straightforward as coding monolithic applications. Developers have to deal with the issues of parallel programming for distributed infrastructures while considering the battery lifetime and the network variability produced by the high mobility of this kind of device. As with any other distributed environment, developers turn to programming models to improve their productivity, avoiding the complexity of manually dealing with these issues and delegating their management to the corresponding model. This thesis contributes to the current state of the art with an adaptation of the COMPSs programming model for MCC environments. COMPSs allows application programmers to code their applications in a sequential, infrastructure-agnostic fashion, without calls to any COMPSs-specific API, using the native language of the target platform as if they were to run on the mobile device. At execution time, a runtime system automatically partitions the application into tasks and orchestrates their execution on top of the available resources. This thesis extends the programming model to allow task polymorphism and lets the runtime exploit computational resources other than the CPU. In addition, the runtime architecture has been redesigned with the characteristics of MCC in mind, and it runs as a common service that all applications running simultaneously on the mobile device contact to submit the execution of their tasks. To exploit local and remote resources collaboratively, the runtime clusters the computational devices into Computing Platforms according to the mechanisms required to provide the processing elements with the necessary input values, launch the task execution while avoiding resource oversubscription, and fetch the results back from them. The CPU Platform runs tasks on the cores of the CPU. The GPU Platform leverages OpenCL to run tasks as kernels on GPUs or other accelerators embedded in the mobile device. Finally, the Cloud Platform offloads the execution of tasks onto remote resources. To decide holistically whether it is worth running a task on embedded or on remote resources, the runtime considers the costs (time, energy, and money) of running the computation on each of the platforms and picks the best one. Each platform manages its resources internally and orchestrates the execution of tasks on them using different scheduling policies. Using local and remote computing devices forces the runtime to share data among the nodes of the infrastructure. This data is potentially privacy-sensitive, and the runtime exposes it to possible attackers when transferring it through the network. To protect the application user from data leaks, the runtime has to provide communications with secrecy, integrity, and authenticity.
In the extreme case of a network breakdown that isolates the mobile device from the remote nodes, the runtime has to ensure that the execution continues and provides the application user with the expected result even if the connection is never re-established. The mobile device has to respond using only the resources embedded in it, which may require re-executing computations already run on the remote resources. Remote workers have to continue with the execution so that, in case of reconnection, both parts can synchronize their progress and reduce the impact of the disruption.
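The cost-based platform selection described in this abstract can be pictured with a schematic sketch (the interfaces, platform models, and weights below are assumptions made for illustration, not the actual COMPSs-Mobile API): each candidate platform estimates the time, energy, and monetary cost of a task, and the runtime picks the one with the lowest weighted score.

// Illustrative only: names and the scoring rule are assumptions,
// not the real COMPSs-Mobile interfaces.
interface CostEstimate {
  timeMs: number;   // expected execution plus transfer time
  energyJ: number;  // battery energy drawn on the mobile device
  moneyUsd: number; // price of remote (cloud) resources
}

interface ComputingPlatform {
  name: string;
  estimate(taskSize: number): CostEstimate;
}

// Weighted sum used to compare heterogeneous costs; the weights are tunable.
function score(c: CostEstimate, w = { time: 1, energy: 0.5, money: 2 }): number {
  return w.time * c.timeMs + w.energy * c.energyJ + w.money * c.moneyUsd;
}

function pickPlatform(platforms: ComputingPlatform[], taskSize: number): ComputingPlatform {
  return platforms.reduce((best, p) =>
    score(p.estimate(taskSize)) < score(best.estimate(taskSize)) ? p : best);
}

// Toy models: the local CPU is cheap for small tasks, the cloud wins for large ones.
const cpu: ComputingPlatform = {
  name: "CPU",
  estimate: (n) => ({ timeMs: n * 10, energyJ: n * 0.2, moneyUsd: 0 }),
};
const cloud: ComputingPlatform = {
  name: "Cloud",
  estimate: (n) => ({ timeMs: 500 + n, energyJ: 1, moneyUsd: 0.001 * n }),
};
console.log(pickPlatform([cpu, cloud], 5).name);    // small task -> "CPU"
console.log(pickPlatform([cpu, cloud], 5000).name); // large task -> "Cloud"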