To support the large number of diverse applications generated by the Internet of Things (IoT), Fog Computing was introduced to complement Cloud Computing and offer Cloud-like services at the edge of the network with low latency and real-time responses. The large scale, geographical distribution, and heterogeneity of edge computational nodes make service placement in such an infrastructure a challenging issue. The diversity of user expectations and of IoT device characteristics further complicates the deployment problem. This paper presents a survey of current research on the Service Placement Problem (SPP) in Fog/Edge Computing. Based on a new classification scheme, current proposals are categorized, and identified issues and challenges are discussed.
In order to improve locality, new Cloud-related architectures such as Edge Computing have been proposed. Despite the growing popularity of these new architectures, their energy consumption has not been well investigated yet. To move forward on such a critical question, we first introduce a taxonomy of different Cloud-related architectures. From this taxonomy, we then present an energy model to evaluate their consumption. Unlike previous proposals, our model comprises the full energy consumption of the computing facilities, including cooling systems, as well as the energy consumption of the network devices linking end users to Cloud resources. Finally, we instantiate our model on different Cloud-related architectures, ranging from fully centralized to completely distributed ones, and compare their energy consumption. The results show that a completely distributed architecture, because it uses neither an intra-data-center network nor large cooling systems, consumes between 14% and 25% less energy than fully centralized and partly distributed architectures, respectively. To the best of our knowledge, our work is the first to propose a model that enables researchers to analyze and compare the energy consumption of different Cloud-related architectures.
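The abstract gives no equations; as a rough illustration of the decomposition it describes (computing facilities, cooling systems, and the network linking end users to Cloud resources), one could write the total consumption as follows, with all notation being ours rather than the paper's:

% Illustrative notation only; this is not the paper's actual model.
% E_IT   : energy of the servers hosting the workload
% E_cool : energy of the cooling systems (large for centralized facilities)
% E_net  : energy of the network devices between end users and the
%          resources, including any intra-data-center network
\begin{equation}
  E_{\mathrm{total}} = E_{\mathrm{IT}} + E_{\mathrm{cool}} + E_{\mathrm{net}}
\end{equation}

Under such a decomposition, a completely distributed architecture shrinks the cooling and intra-data-center terms, which is consistent with the 14% to 25% savings the abstract reports.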
Fog and Edge Computing infrastructures have been proposed to address the latency issues of current Cloud Computing platforms. While a few works have illustrated the advantages of these infrastructures, in particular for Internet of Things (IoT) applications, elementary Cloud services that can take advantage of the geo-distribution of resources have not been proposed yet. In this paper, we propose a first-class object store service for Fog/Edge facilities. Our proposal is built with Scale-out Network Attached Storage (NAS) systems and IPFS, a BitTorrent-based object store spread throughout the Fog/Edge infrastructure. Without sacrificing the advantages of IPFS, particularly in terms of data mobility, the use of a Scale-out NAS on each site reduces the inter-site exchanges that are costly but mandatory for metadata management in the original IPFS implementation. Several experiments conducted on the Grid'5000 testbed confirm, first, the benefit of using an object store service spread at the Edge and, second, the importance of mitigating inter-site accesses. The paper concludes by giving a few directions to improve the performance and fault-tolerance criteria of our Fog/Edge Object Store Service.
[Figure: Fog/Edge infrastructure spanning the Cloud, micro/nano data centers, and the edge, extreme-edge, domestic, and enterprise networks over wired, wireless, and hybrid links, with Cloud latency LCloud ≃ 100 ms, Edge-to-Fog latency LFog of 10-100 ms, and inter-micro-DC latency LCore of 50-100 ms.]
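The abstract describes the design only at a high level; the following Python sketch, with entirely hypothetical class and method names, illustrates the read path such a coupling suggests: intra-site accesses are served by the site's Scale-out NAS, and the costly inter-site IPFS fetch happens only on a local miss.

# Hypothetical sketch (names are ours, not the paper's): serve an object
# from the site-local Scale-out NAS when possible, and fall back to the
# inter-site IPFS lookup on a miss, caching the result on-site.

class LocalNAS:
    """Stand-in for a site's Scale-out NAS, here just a dict."""
    def __init__(self):
        self.store = {}
    def read(self, oid):
        return self.store.get(oid)
    def write(self, oid, data):
        self.store[oid] = data

class GlobalIPFS:
    """Stand-in for the inter-site IPFS network."""
    def __init__(self, objects):
        self.objects = objects
    def fetch(self, oid):
        return self.objects[oid]          # remote, high-latency access

class SiteObjectStore:
    def __init__(self, nas, ipfs):
        self.nas, self.ipfs = nas, ipfs
    def get(self, oid):
        data = self.nas.read(oid)         # intra-site access: cheap
        if data is None:
            data = self.ipfs.fetch(oid)   # inter-site access: expensive
            self.nas.write(oid, data)     # keep later reads on-site
        return data

site = SiteObjectStore(LocalNAS(), GlobalIPFS({"obj1": b"payload"}))
assert site.get("obj1") == b"payload"     # first read goes through IPFS
assert site.get("obj1") == b"payload"     # second read is served locally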
SUMMARY: One of the principal goals of Cloud Computing is to outsource the hosting of data and applications, thus enabling a pay-per-use model of computation. Data and applications may be packaged in virtual machines (VMs), which are themselves hosted by nodes, i.e., physical machines (PMs). Several frameworks have been designed to manage VMs on pools of PMs; most of them, however, do not efficiently address a major objective of cloud providers: maximizing system utilization while ensuring quality of service (QoS). Several approaches exploit virtualization capabilities to improve this trade-off. However, dynamically scheduling a large number of VMs across a large distributed infrastructure raises hard scalability problems that become even worse when VM image transfers have to be managed. Consequently, most current frameworks schedule VMs statically using a centralized control strategy. In this article, we present DVMS (Distributed VM Scheduler), a framework that enables VMs to be scheduled cooperatively and dynamically in large-scale distributed systems. We describe, in particular, how several VM reconfigurations can be computed in parallel and applied simultaneously. Reconfigurations are enabled by partitioning the system (i.e., nodes and VMs) on the fly; each partition is created with the minimum of resources necessary to find a solution to its reconfiguration problem. Moreover, we propose an algorithm to handle the deadlocks that may appear because of this partitioning policy. We have evaluated our prototype through simulations and compared our approach to a centralized one. The results show that our scheduler reconfigures VMs more efficiently: the time needed to manage thousands of VMs on hundreds of machines is typically reduced to a tenth or less.
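The abstract states the partitioning principle but not its algorithms. The toy Python sketch below only illustrates the stated idea of growing a partition until it holds just enough resources to solve the reconfiguration problem; the feasibility test, the data layout, and the visiting order of spare nodes are all our simplifying assumptions, not DVMS internals.

# Toy illustration of on-the-fly partition growth (not DVMS's algorithm):
# starting from an overloaded node, add nodes one by one until the VMs in
# the partition fit on the nodes in the partition.

def can_reconfigure(partition):
    """Placeholder feasibility test: total VM load fits total capacity."""
    load = sum(n["load"] for n in partition)
    capacity = sum(n["capacity"] for n in partition)
    return load <= capacity

def grow_partition(overloaded, spare_nodes):
    """Grow a minimal partition around an overloaded node."""
    partition = [overloaded]
    for node in spare_nodes:          # visiting order is an assumption here
        if can_reconfigure(partition):
            return partition          # smallest partition with a solution
        partition.append(node)
    # No solution even with every reachable node: this is where DVMS's
    # deadlock-handling algorithm would have to intervene.
    return None

nodes = [{"load": 12, "capacity": 8},   # overloaded node
         {"load": 2, "capacity": 8},
         {"load": 1, "capacity": 8}]
print(grow_partition(nodes[0], nodes[1:]))  # a two-node partition suffices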