Abstract—In recent years, traffic over wireless networks has been increasing exponentially due to the impact of the Internet of Things (IoT). IoT is transforming a wide range of services in different domains of urban life, such as environmental monitoring, home automation and public transportation. So-called Smart City applications introduce a set of stringent requirements, such as low latency and high mobility, since services must be allocated and instantiated on-demand, simultaneously close to multiple devices at different locations. Efficient resource provisioning functionalities are needed to address these demanding constraints while minimizing resource costs and maximizing Quality of Service (QoS). In this article, the City of Things (CoT) framework is presented, which provides not only data collection and analysis functionalities but also automated resource provisioning mechanisms for future Smart City applications. CoT is deployed as a Smart City testbed in Antwerp (Belgium) that allows researchers and developers to easily set up and validate IoT experiments. A Smart City use case based on air quality monitoring, using air quality sensors deployed in moving cars, is presented, showing the full applicability of the CoT framework for flexible and scalable resource provisioning in the Smart City ecosystem.
Vendor lock-in is one of the major issues preventing companies from moving their big data applications to the cloud or switching between cloud providers. Choosing a provider based on the datastores it offers can be advantageous at first, but as applications evolve the chosen datastore may no longer be optimal after some time. Application requirements change due to frequent updates and feature requests, and scalability issues arise as user numbers continuously grow. In this paper we propose a framework for the live transformation of the schema and data of datastores. By using a canonical data model, the framework can easily be extended to additional datastores. The framework performs the transformation on two levels: a batch layer transforms a snapshot of the datastore, while a speed layer transforms queries that insert new or updated data into the datastore. A transformation between MySQL and Cassandra is given as a proof of concept. We show the correctness of the transformation and provide performance results in terms of transformation times and overhead.
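The two-level design described in the abstract can be illustrated with a minimal sketch: a batch layer converts a full snapshot of the source datastore into target-store statements, while a speed layer rewrites live insert queries so writes arriving during migration are not lost. All names here (the dict-based canonical rows, `to_cassandra_insert`, the `users` table) are illustrative assumptions, not the paper's actual API.

```python
def to_cassandra_insert(row):
    """Render one canonical row (a dict) as a CQL-style INSERT statement."""
    cols = ", ".join(row)
    vals = ", ".join(repr(v) for v in row.values())
    return f"INSERT INTO users ({cols}) VALUES ({vals});"

def batch_transform(snapshot):
    """Batch layer: convert a snapshot (list of canonical rows) at once."""
    return [to_cassandra_insert(row) for row in snapshot]

def speed_transform(live_row):
    """Speed layer: rewrite a single incoming row as it arrives."""
    return to_cassandra_insert(live_row)

snapshot = [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]
print(batch_transform(snapshot)[0])
# INSERT INTO users (id, name) VALUES (1, 'alice');
print(speed_transform({"id": 3, "name": "carol"}))
# INSERT INTO users (id, name) VALUES (3, 'carol');
```

In the framework itself, both layers would translate through the canonical model rather than emitting statements directly; the sketch only shows why two layers are needed, namely that the snapshot and the live write stream must be transformed concurrently.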
Summary—Legacy applications have been built around the concept of storing their data in one relational data store. However, with the current differentiation in data store technologies brought about by the NoSQL paradigm, new and possibly more performant storage solutions are available to all applications. The concept of dynamic storage ensures that application data are always stored in the optimal data store at any given time, increasing application performance. Polyglot persistence pushes this performance even further by storing each data type of an application in the data store technology best suited for it. To move legacy applications to dynamic storage and polyglot persistence, schema and data transformations between data store technologies are needed. This usually implies application redesigns as well, to support the new data stores. This paper proposes such a transformation approach through a canonical model. It is based on the Lambda architecture to ensure no application downtime is needed during the transformation process, and after the transformation the application can continue to query in the original query language, thus requiring no application code changes.
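The polyglot-persistence idea above can be sketched as a simple routing table that maps each application data type to the store technology assumed to suit it best. The mapping and data-type names below are purely illustrative assumptions, not taken from the paper.

```python
# Hypothetical mapping of application data types to datastore technologies.
STORE_FOR_TYPE = {
    "session": "redis",           # volatile key-value data
    "order": "mysql",             # relational, transactional data
    "activity_log": "cassandra",  # high-volume, append-only data
}

def route(data_type):
    """Pick a datastore for a given data type; fall back to relational."""
    return STORE_FOR_TYPE.get(data_type, "mysql")

print(route("activity_log"))  # cassandra
print(route("invoice"))       # mysql (fallback)
```

In the paper's approach the application never sees this routing directly: queries stay in the original query language and are translated through the canonical model, so the mapping can change over time without application code changes.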
Big data applications have stringent service requirements for scalability and fault tolerance and involve high volumes of data, high processing speeds and large varieties of database technologies. In order to test big data management solutions, large experimentation facilities are needed, which are expensive in terms of both resource cost and configuration time. This paper presents Tengu, an experimentation platform for big data applications that can be automatically instantiated on GENI (the US federation of testbeds) and Fed4FIRE (the EU federation of testbeds) compatible testbeds. Tengu allows for automatic deployments of several data processing, storage and cloud technologies, including Hadoop, Storm and OpenStack. The paper discusses the Tengu architecture and the Tengu-as-a-service approach, and demonstrates an automated instantiation of the Tengu experimentation suite on the Virtual Wall, a large-scale Emulab testbed at the iMinds research institute in Europe.