Abstract: Edge computing has emerged as a new paradigm that brings cloud applications closer to users for increased performance. ISPs have the opportunity to deploy private edge-clouds in their infrastructure to generate additional revenue by providing ultra-low latency applications to local users. We envision a rapid increase in the number of such applications for "edge" networks in the near future, with virtual/augmented reality (VR/AR), networked gaming, wearable cognitive assistance, autonomous driving and IoT analytics having already been proposed for edge-clouds instead of central clouds to improve performance. This raises new challenges, as the resource allocation problem for multiple services with latency deadlines (i.e., which service to place at which node of the edge-cloud in order to satisfy the latency constraints) becomes significantly more complex. In this paper, we propose a set of practical, uncoordinated strategies for service placement in edge-clouds. Through extensive simulations using both synthetic and real-world trace data, we demonstrate that uncoordinated strategies can perform comparably to the optimal placement solution, which satisfies the maximum number of user requests.
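The uncoordinated placement idea above can be illustrated with a minimal sketch: each edge node independently (without coordinating with other nodes) hosts the services whose latency deadlines it can meet, preferring locally popular ones, subject to its own capacity. The data structures, the single-latency-per-node model, and the greedy ordering are illustrative assumptions, not the paper's exact heuristics.

```python
def greedy_placement(nodes, services, demand):
    """Uncoordinated greedy placement sketch.

    nodes:    {node: {"capacity": int, "latency": float}}  (latency to local users)
    services: {service: {"size": int, "deadline": float}}
    demand:   {node: {service: request count}}
    Returns a map from each node to the services it chooses to host.
    """
    placement = {}
    for n, node in nodes.items():
        # Only services whose deadline this node's latency can satisfy.
        feasible = [s for s, sv in services.items()
                    if node["latency"] <= sv["deadline"]]
        # Prefer the services most requested by this node's local users.
        feasible.sort(key=lambda s: demand[n].get(s, 0), reverse=True)
        chosen, used = [], 0
        for s in feasible:
            if used + services[s]["size"] <= node["capacity"]:
                chosen.append(s)
                used += services[s]["size"]
        placement[n] = chosen
    return placement
```

Because every node decides alone from purely local information, the scheme needs no signalling between nodes, which is what makes it practical at the cost of potentially duplicating popular services.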
Information-Centric Networking (ICN) has been proposed as a promising solution for the Internet of Things (IoT), due to its focus on naming data, rather than endpoints, which can greatly simplify applications. The hierarchical naming of the Named-Data Networking (NDN) architecture can be used to name groups of data values, for example, all temperature sensors in a building. However, the use of a single naming hierarchy for all kinds of different applications is inflexible. Moreover, IoT data are typically retrieved from multiple sources at the same time, allowing applications to aggregate similar information items, something not natively supported by NDN. To this end, in this paper we propose (a) locating IoT data using (unordered) keywords combined with NDN names and (b) processing multiple such items at the edge of the network with arbitrary functions. We describe and evaluate three different strategies for retrieving data and placing the calculations in the edge IoT network, thus combining connectivity, storage and computing.
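The keyword-plus-name lookup and edge aggregation described above can be sketched as follows. The item layout (an NDN-style name prefix, an unordered keyword set, and a value) and the function names are assumptions for illustration; the paper's actual encoding and forwarding strategies are not reproduced here.

```python
def matches(item, prefix, keywords):
    """An item matches when its NDN-style name starts with the requested
    prefix AND it carries all of the requested (unordered) keywords."""
    return item["name"].startswith(prefix) and keywords <= item["keywords"]

def aggregate_at_edge(items, prefix, keywords, fn):
    """Collect all matching data items and apply an arbitrary aggregation
    function (e.g. mean, max) at the edge, instead of shipping every item
    back to the consumer."""
    values = [i["value"] for i in items if matches(i, prefix, keywords)]
    return fn(values)
```

For example, averaging all temperature readings under `/bldg1` tagged with the keyword `temp` combines retrieval (the prefix), selection (the keywords), and computation (the function) in one request, which is the combination of connectivity, storage and computing the abstract refers to.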
New and emerging applications in the entertainment (e.g., Virtual/Augmented Reality), IoT and automotive domains will soon demand response times an order of magnitude smaller than can be achieved by the current "client-to-cloud" network model. Edge- and Fog-computing have been proposed as promising approaches to deal with such extremely latency-sensitive applications. In Edge-/Fog-computing, computing resources are available at the edge of the network for applications to run their virtualised instances. We assume a distributed computing environment, where In-Network Computing Providers (INCPs) deploy and lease edge resources, while Application Service Providers (AppSPs) have the opportunity to rent those resources to meet their applications' latency demands. We build an auction-based resource allocation and provisioning mechanism which produces a map of application instances in the edge computing infrastructure (hence the acronym Edge-MAP). Edge-MAP takes into account users' mobility (i.e., users connecting to different cell stations over time) and the limited computing resources available in edge micro-clouds to allocate resources to bidding applications. On the micro-level, Edge-MAP relies on Vickrey-English-Dutch (VED) auctions to perform robust resource allocation, while on the macro-level it fosters competition among neighbouring INCPs. In contrast to related studies in the area, Edge-MAP can scale to any number of applications, adapt to dynamic network conditions rapidly and reallocate resources in polynomial time. Our evaluation demonstrates Edge-MAP's capability of taking into account the inherent challenges of the provisioning problem we consider.
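The core incentive property VED auctions inherit can be seen in a much simpler stand-in: a single-item second-price (Vickrey) auction, where the winner pays the runner-up's bid, making truthful bidding the dominant strategy. This is a simplified illustration only, not the multi-item VED mechanism Edge-MAP actually uses.

```python
def vickrey_auction(bids):
    """Single-item second-price auction sketch: the highest bidder wins
    and pays the second-highest bid (or 0 with a single bidder).

    bids: {bidder: bid amount}. Returns (winner, price paid).
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price
```

Because the winning application's payment does not depend on its own bid, overbidding cannot lower its price and underbidding only risks losing the resource; VED generalises this robustness to allocating many heterogeneous edge resources at once.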
An increasing number of Low Latency Applications (LLAs) in the entertainment, IoT, and automotive domains require response times that challenge the traditional application provisioning using distant Data Centres. The fog computing paradigm extends cloud computing to the edge and middle-tier locations of the network, providing response times an order of magnitude smaller than those that can be achieved by the current "client-to-cloud" network model. Here, we address the challenges of provisioning heavily stateful LLAs in the setting where fog infrastructure consists of third-party computing resources, i.e., cloudlets, that come in the form of "data centres in a box". We introduce FogSpot, a charging mechanism for on-path, on-demand application provisioning. In FogSpot, cloudlets offer their resources in the form of Virtual Machines (VMs) via markets, collocated with the cloudlets, that interact with forwarded users' application requests for VMs in real time. FogSpot associates each cloudlet with a price based on applications' demand. The proposed mechanism's design takes into account the characteristics of cloudlets' resources, such as their limited elasticity, and LLAs' attributes, like their expected QoS gain and engagement duration. Lastly, FogSpot guarantees the truthfulness of end users' requests while focusing on maximising either each cloudlet's revenue or resource utilisation.
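The demand-driven pricing idea can be sketched with a simple tatonnement-style update: raise the per-VM price when requests at or above the current price exceed the cloudlet's capacity, and lower it when capacity sits idle. The update rule and the `step` parameter are illustrative assumptions, not FogSpot's actual pricing algorithm.

```python
def update_spot_price(price, bids, capacity, step=0.1):
    """One price-update round for a cloudlet's VM market sketch.

    price:    current per-VM spot price
    bids:     per-request willingness-to-pay values seen this round
    capacity: number of VMs the cloudlet can lease (limited elasticity)
    """
    # Demand = requests willing to pay at least the current price.
    demand = sum(1 for b in bids if b >= price)
    if demand > capacity:
        return price * (1 + step)          # scarce: raise the price
    if demand < capacity:
        return max(0.0, price * (1 - step))  # idle: lower the price
    return price                            # balanced: hold steady
```

Iterating this rule pushes the price toward the level where demand just fills the cloudlet, which is how a spot market can trade off revenue against utilisation.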