The amount of data generated by sensors, actuators and other devices in the Internet of Things (IoT) has substantially increased in the last few years. IoT data are currently processed in the cloud, mostly through computing resources located in distant data centers. As a consequence, network bandwidth and communication latency become serious bottlenecks. This article advocates edge computing for emerging IoT applications that leverage sensor streams to augment interactive applications. First, we classify and survey current edge computing architectures and platforms, then describe key IoT application scenarios that benefit from edge computing. Second, we carry out an experimental evaluation of edge computing and its enabling technologies in a selected use case: mobile gaming. To this end, we consider a resource-intensive 3D application as a paradigmatic example and evaluate the response delay in different deployment scenarios. Our experimental results show that edge computing is necessary to meet the latency requirements of applications involving virtual and augmented reality. We conclude by discussing what can be achieved with current edge computing platforms and how emerging technologies will impact the deployment of future IoT applications.
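The abstract does not detail the measurement setup; as a rough, self-contained illustration of how response delay can be compared across deployment locations, the following Python sketch measures TCP connection setup time toward a hypothetical edge host and a hypothetical cloud host. The host names and port are placeholders, not the paper's testbed.

```python
# Illustrative sketch (not the paper's testbed): compare round-trip delay toward an
# edge host and a distant cloud host, using TCP connection setup time as a rough
# proxy for network latency. Host names and port are placeholders.
import socket
import statistics
import time

HOSTS = {"edge": ("edge.example.local", 443), "cloud": ("cloud.example.com", 443)}

def connect_rtt_ms(host, port, samples=20):
    """Return the median TCP connect time (ms) over a number of samples."""
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            rtts.append((time.perf_counter() - start) * 1000)
    return statistics.median(rtts)

for label, (host, port) in HOSTS.items():
    try:
        print(f"{label}: ~{connect_rtt_ms(host, port):.1f} ms median connect time")
    except OSError as exc:
        print(f"{label}: unreachable ({exc})")
```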
Large-scale Internet of Things (IoT) deployments demand long-range wireless communications, especially in urban and metropolitan areas. LoRa is one of the most promising technologies in this context due to its simplicity and flexibility. However, LoRa networks deployed in dense IoT scenarios must achieve two main goals: efficient communications among a large number of devices and resilience against dynamic channel conditions caused by demanding environmental settings (e.g., the presence of many buildings). This work investigates adaptive mechanisms to configure the communication parameters of LoRa networks in dense IoT scenarios. To this end, we develop FLoRa, an open-source framework for end-to-end LoRa simulations in OMNeT++. We then implement and evaluate the Adaptive Data Rate (ADR) mechanism built into LoRa to dynamically manage link parameters for scalable and efficient network operations. Extensive simulations show that ADR is effective in increasing the network delivery ratio under stable channel conditions while keeping energy consumption low. Our results also show that the performance of ADR is severely affected by a highly varying wireless channel. We therefore propose an improved version of the original ADR mechanism to cope with variable channel conditions. Our proposed solution significantly increases both the reliability and the energy efficiency of communications over a noisy channel, almost irrespective of the network size. Finally, we show that the delivery ratio of very dense networks can be further improved by using a network-aware approach, wherein the link parameters are configured based on global knowledge of the network.
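The FLoRa code itself is not reproduced here; as a hedged illustration of the kind of network-side logic that ADR implements, the following Python sketch derives a new spreading factor and transmit power from the SNR margin of recent uplinks. The thresholds, step sizes and required-SNR values are assumptions drawn from common descriptions of LoRaWAN ADR, not from FLoRa.

```python
# Hedged sketch of a network-side ADR decision, loosely following the commonly
# described LoRaWAN algorithm; all constants are illustrative assumptions.

# Approximate SNR (dB) required to demodulate each spreading factor.
REQUIRED_SNR = {7: -7.5, 8: -10.0, 9: -12.5, 10: -15.0, 11: -17.5, 12: -20.0}

def adr_decision(snr_history, sf, tx_power_dbm,
                 margin_db=10.0, min_sf=7, max_tx_power_dbm=14, min_tx_power_dbm=2):
    """Return a new (sf, tx_power_dbm) based on the best SNR of recent uplinks."""
    snr_max = max(snr_history)                      # ADR typically considers the last ~20 frames
    snr_margin = snr_max - REQUIRED_SNR[sf] - margin_db
    steps = int(snr_margin // 3)                    # one step per 3 dB of spare margin

    while steps > 0 and sf > min_sf:                # first trade margin for a faster data rate
        sf -= 1
        steps -= 1
    while steps > 0 and tx_power_dbm > min_tx_power_dbm:
        tx_power_dbm -= 3                           # then reduce transmit power
        steps -= 1
    while steps < 0 and tx_power_dbm < max_tx_power_dbm:
        tx_power_dbm += 3                           # negative margin: raise power again
        steps += 1
    return sf, tx_power_dbm

# Example: a node at SF12 with strong recent uplinks is moved to a faster data rate.
print(adr_decision([-2.0, -5.5, -3.1], sf=12, tx_power_dbm=14))
```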
The main drivers for the evolution of the mobile core network are the need to meet future challenges and to pave the way toward 5G networks, with their demand for high capacity and low latency. Different technologies such as Network Functions Virtualization (NFV) and Software Defined Networking (SDN) are being considered to address the future needs of 5G networks. However, future applications such as the Internet of Things (IoT), video services and others yet to emerge will have different requirements, which emphasizes the need for dynamic scalability of network functionality. Efficient operation of network resources appears to be even more important than the cost of future network elements. This paper analyzes technologies such as SDN and NFV that offer different architectural options to address the needs of 5G networks. The options under consideration differ mainly in the extent to which SDN principles are applied: to mobile-specific functions as well, or to transport network functions only.
Long Range (LoRa) is a wireless communication standard specifically targeted at resource-constrained Internet of Things (IoT) devices. LoRa is a promising solution for smart city applications, as it provides long-range connectivity with low energy consumption. The number of LoRa-based networks is growing due to LoRa's operation in unlicensed radio bands and the ease of network deployment. However, the scalability of such networks suffers as the number of deployed devices increases. In particular, network performance drops due to increased contention and interference in the unlicensed LoRa radio bands. This results in an increased number of dropped messages and, therefore, unreliable network communications. Nevertheless, network performance can be improved by appropriately configuring the radio parameters of each node. To this end, we formulate integer linear programming models to configure LoRa nodes with the optimal parameters that allow all devices to reliably send data with low energy consumption. We evaluate the performance of our solutions through extensive network simulations considering different types of realistic deployments. We find that our solution consistently achieves a higher delivery ratio (up to 8% higher) than the state of the art with minimal energy consumption. Moreover, the higher delivery ratio is achieved by a large percentage of nodes in each network, thereby resulting in a fair allocation of radio resources. Finally, the optimal network configurations are obtained within a short time, usually much faster than the state of the art. Thus, our solution can be readily used by network operators to determine optimal configurations for their IoT deployments, resulting in improved network reliability.
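The exact integer linear programming models are not given in the abstract; the sketch below is only an illustrative toy formulation of the same flavour, written with the PuLP library: each node is assigned one spreading factor, total airtime (a proxy for energy) is minimized, and the aggregate airtime per spreading factor is capped to limit contention. The airtime values and the capacity budget are assumptions.

```python
# Illustrative ILP sketch (not the authors' model): assign one spreading factor (SF)
# per node, minimizing total airtime while keeping each SF's aggregate airtime below
# an assumed budget. Requires: pip install pulp
import pulp

nodes = range(10)                       # toy deployment
sfs = [7, 8, 9, 10, 11, 12]
airtime = {7: 0.06, 8: 0.11, 9: 0.21, 10: 0.37, 11: 0.74, 12: 1.32}  # s per packet (approx.)
capacity = 0.5                          # assumed airtime budget per SF per reporting period (s)

prob = pulp.LpProblem("lora_sf_assignment", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (nodes, sfs), cat="Binary")   # x[n][s] = 1 if node n uses SF s

# Objective: minimize total airtime (a proxy for energy consumption).
prob += pulp.lpSum(airtime[s] * x[n][s] for n in nodes for s in sfs)

# Each node is assigned exactly one spreading factor.
for n in nodes:
    prob += pulp.lpSum(x[n][s] for s in sfs) == 1

# Aggregate airtime per SF stays below the assumed budget (limits contention).
for s in sfs:
    prob += pulp.lpSum(airtime[s] * x[n][s] for n in nodes) <= capacity

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({n: next(s for s in sfs if x[n][s].value() > 0.5) for n in nodes})
```

In a realistic formulation the objective and constraints would also account for transmit power, per-link budgets and duty-cycle regulations; the sketch only conveys the general structure of such a model.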
Recent advancements in virtualization and software architecture have led to the new paradigm of serverless computing, which allows developers to deploy applications as stateless functions without worrying about the underlying infrastructure. Accordingly, a serverless platform handles the lifecycle, execution and scaling of the actual functions, which need to run only when invoked or triggered by an event. Thus, the major benefits of serverless computing are reduced operational concerns and efficient resource management and utilization. Serverless computing is currently offered by several public cloud service providers. However, public cloud platforms have certain limitations, such as vendor lock-in and restrictions on the execution of functions. Open-source serverless frameworks are a promising solution to avoid these limitations and bring the power of serverless computing to on-premise deployments. However, these frameworks have not been evaluated before. Thus, we carry out a comprehensive feature comparison of popular open-source serverless computing frameworks. We then evaluate the performance of selected frameworks: Fission, Kubeless and OpenFaaS. Specifically, we characterize the response time and the ratio of successfully received responses under different loads, and provide insights into the design choices of each framework.
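The paper's measurement harness is not described in the abstract; a minimal sketch of how response time and success ratio could be characterized under concurrent load against a deployed function is shown below. The endpoint URL, request count and concurrency level are placeholders, not the paper's actual setup.

```python
# Rough sketch of a load test measuring response time and success ratio for a
# deployed serverless function; endpoint, request count and concurrency are
# placeholders. Requires: pip install requests
import concurrent.futures
import statistics
import time

import requests

URL = "http://<gateway>/function/hello"   # placeholder endpoint (e.g., an OpenFaaS route)

def call_once(_):
    """Issue one request and return (latency in seconds, success flag)."""
    start = time.perf_counter()
    try:
        ok = requests.get(URL, timeout=10).status_code == 200
    except requests.RequestException:
        ok = False
    return time.perf_counter() - start, ok

def run(total_requests=500, concurrency=50):
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(call_once, range(total_requests)))
    latencies = [t for t, ok in results if ok]
    print(f"success ratio: {len(latencies) / len(results):.2%}")
    if latencies:
        print(f"median latency: {statistics.median(latencies) * 1000:.1f} ms")

if __name__ == "__main__":
    run()
```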