In the near future, the number of Internet-connected objects is expected to reach between 26 and 50 billion devices. This figure is expected to grow even further due to the production of miniaturized portable devices that are lightweight, energy-efficient, and cost-efficient. In this paper, the entire IoT-fog-cloud architecture is modeled, the service placement problem is formulated using Mixed Integer Linear Programming (MILP), and the total power consumption of processing and networking is jointly minimized. We evaluate the distributed processing paradigm under both the uncapacitated and capacitated design settings in order to provide long-term and short-term planning solutions, respectively. Furthermore, four aspects of the IoT processing placement problem are examined: 1) IoT services with non-splittable tasks, 2) IoT services with splittable tasks, 3) the impact of processing overheads needed for inter-service communication, and 4) the deployment of special-purpose data centers (SP-DCs) as opposed to the conventional general-purpose data center (GP-DC) in the core network. The results show that, for the capacitated problem, task splitting introduces power savings of up to 86%, compared to 46% with non-splittable tasks. Moreover, it is observed that the overheads due to inter-service communication greatly impact the total number of splits. Even when the overhead factor is small, the results show that it is not a trivial matter, and hence close attention needs to be paid to this area to make the best use of the resources available at the edge of the network.
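To make the formulation concrete, the following is a minimal sketch, not the authors' full model, of a capacitated service placement MILP of this kind, written with the open-source PuLP library. The node names, power coefficients, capacities, and demand sizes are hypothetical placeholders; the paper's model additionally captures the full IoT-fog-cloud network topology, the uncapacitated variant, task splitting, and inter-service communication overheads.

```python
# Minimal sketch of a service placement MILP (hypothetical parameters).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

# Hypothetical processing nodes: power per unit workload (W/MIPS) and capacity (MIPS).
nodes = {
    "iot_node": {"w_per_mips": 0.8, "capacity": 500},
    "fog_node": {"w_per_mips": 0.5, "capacity": 2000},
    "gp_dc":    {"w_per_mips": 0.3, "capacity": 10000},
}
# Hypothetical network power cost (W/Mbps) of reaching each node from the IoT layer.
net_w_per_mbps = {"iot_node": 0.0, "fog_node": 0.2, "gp_dc": 1.5}

# Hypothetical IoT services: processing demand (MIPS) and generated traffic (Mbps).
services = {
    "s1": {"mips": 400,  "mbps": 20},
    "s2": {"mips": 1500, "mbps": 50},
}

prob = LpProblem("service_placement", LpMinimize)

# x[s][n] = 1 if service s (non-splittable task) is placed on node n.
x = {s: {n: LpVariable(f"x_{s}_{n}", cat=LpBinary) for n in nodes} for s in services}

# Objective: jointly minimize processing and networking power.
prob += lpSum(
    x[s][n] * (services[s]["mips"] * nodes[n]["w_per_mips"]
               + services[s]["mbps"] * net_w_per_mbps[n])
    for s in services for n in nodes
)

# Each service is placed on exactly one node (non-splittable case).
for s in services:
    prob += lpSum(x[s][n] for n in nodes) == 1

# Capacitated variant: respect node processing capacities.
for n in nodes:
    prob += lpSum(x[s][n] * services[s]["mips"] for s in services) <= nodes[n]["capacity"]

prob.solve()
for s in services:
    for n in nodes:
        if value(x[s][n]) > 0.5:
            print(f"{s} -> {n}")
```

Dropping the capacity constraints gives the uncapacitated (long-term planning) variant, while relaxing the binary placement variables to continuous fractions corresponds to the splittable-task case discussed above.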
With the rapid proliferation of connected devices in the Internet of Things (IoT), the centralized cloud solution faces several challenges, among which there is an overwhelming consensus to put energy efficiency at the top of the research agenda. In this paper, we evaluate the impact of demand splitting over heterogeneous processing resources in an IoT platform supported by Fog and Cloud infrastructure. We develop a Mixed Integer Linear Programming (MILP) model to study the gains of splitting resource-intensive demands among IoT nodes, Fog devices, and Cloud servers. A surveillance application is considered, consisting of multiple smart cameras capable of capturing and analyzing real-time video streams. The PON access network aggregates IoT-layer demands for processing in the Fog, or in the Cloud, which is accessed through the IP/WDM network. For typical video analysis workloads, the results show that splitting medium-sized demands among IoT and Fog resources yields a total power consumption saving of up to 32%, even if these resources can host only 10% of the total workload, and this saving can reach 93% for a lower number of demands, compared to the centralized cloud solution. However, the gains in power savings from splitting decrease as the number of splits increases.
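A minimal sketch of the splittable-demand idea, under assumed parameters, is shown below: continuous variables divide a single resource-intensive demand among IoT, Fog, and Cloud tiers instead of placing it on one node. The tier names, power coefficients, traffic figure, and the 10% hosting limit are illustrative assumptions, not the paper's actual inputs.

```python
# Minimal sketch of demand splitting across tiers (hypothetical parameters).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

total_mips = 3000      # one resource-intensive video-analysis demand (hypothetical)
traffic_mbps = 80      # traffic generated by the demand (hypothetical)

# Power per MIPS, network power per Mbps, and the maximum share of the demand
# each tier may host (here IoT and Fog are limited to 10% each).
tiers = {
    "iot":   {"w_per_mips": 0.7, "net_w_per_mbps": 0.0, "max_share": 0.10},
    "fog":   {"w_per_mips": 0.5, "net_w_per_mbps": 0.2, "max_share": 0.10},
    "cloud": {"w_per_mips": 0.3, "net_w_per_mbps": 1.5, "max_share": 1.00},
}

prob = LpProblem("demand_splitting", LpMinimize)

# f[t] = fraction of the demand served by tier t.
f = {t: LpVariable(f"f_{t}", lowBound=0, upBound=tiers[t]["max_share"]) for t in tiers}

# Processing power plus the network power of shipping each split's share of traffic.
prob += lpSum(
    f[t] * (total_mips * tiers[t]["w_per_mips"] + traffic_mbps * tiers[t]["net_w_per_mbps"])
    for t in tiers
)
prob += lpSum(f.values()) == 1   # the whole demand must be served

prob.solve()
print({t: round(value(f[t]), 2) for t in tiers})
```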
Internet of Things (IoT) networks are expected to involve a myriad of devices, ranging from simple sensors to powerful single-board computers and smartphones. The great advancement in the computational power of embedded technologies has enabled the integration of these devices into the IoT network, allowing cloud functionalities to be extended close to the source of data. In this paper, we study a multi-layer distributed IoT architecture supported by fog and cloud. We optimize the placement of IoT services in this architecture so that the total power consumption is minimized. Our results show that introducing local computation at the IoT layer can bring up to 90% power savings compared with general-purpose servers in a central cloud.
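The intuition behind such savings can be illustrated with a back-of-the-envelope sketch: processing at the IoT layer avoids the networking power of hauling traffic across the access, metro, and core networks to a general-purpose server. All figures below are illustrative assumptions, not the paper's measured values, and the resulting percentage is purely for demonstration.

```python
# Illustrative comparison of local vs. central-cloud processing power (assumed values).
def total_power(proc_w_per_mips, net_w_per_mbps, mips, mbps):
    """Processing power plus the network power of delivering the input traffic."""
    return proc_w_per_mips * mips + net_w_per_mbps * mbps

demand_mips, demand_mbps = 200, 50                           # hypothetical IoT service
local = total_power(0.6, 0.0, demand_mips, demand_mbps)      # on the IoT device itself
cloud = total_power(0.3, 2.0, demand_mips, demand_mbps)      # GP server reached via the core

print(f"local IoT: {local:.0f} W, central cloud: {cloud:.0f} W")
print(f"saving: {100 * (cloud - local) / cloud:.0f}%")
```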
In the near future, IoT-based application services are anticipated to collect massive amounts of data on which complex and diverse tasks are expected to be performed. Machine learning algorithms such as Artificial Neural Networks (ANNs) are increasingly used in smart environments to predict the output of a given problem based on a set of tuning parameters as the input. To this end, we present an energy-efficient neural network (EE-NN) service embedding framework for IoT-based smart homes. The developed framework adopts the idea of Service-Oriented Architecture (SOA) to provide service abstraction for the multiple complex modules of a NN, which can then be used by a higher application layer. We utilize Mixed Integer Linear Programming (MILP) to formulate the embedding problem so as to minimize the total power consumption of networking and processing simultaneously. The results of the MILP model show that our optimized NN can save up to 86% of power by embedding processing modules in IoT devices, and up to 72% in fog nodes due to the limited capacity of IoT devices.
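The following is a minimal sketch, with made-up numbers, of embedding NN layer modules onto IoT and fog nodes so that processing power plus the power of the inter-module (inter-service) traffic is minimized. The layer sizes, node efficiencies, and link costs are hypothetical, and the product of placement variables is linearized in the standard way; the paper's MILP models the access and core networks and node capacities in full.

```python
# Minimal sketch of NN service embedding across IoT and fog nodes (hypothetical parameters).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

layers = ["input", "hidden", "output"]                       # NN modules to embed
layer_mips = {"input": 100, "hidden": 800, "output": 50}
inter_traffic_mbps = {("input", "hidden"): 10, ("hidden", "output"): 2}

nodes = {"iot": {"w_per_mips": 0.7, "capacity": 300},
         "fog": {"w_per_mips": 0.4, "capacity": 2000}}
link_w_per_mbps = {("iot", "iot"): 0.0, ("iot", "fog"): 0.3,
                   ("fog", "iot"): 0.3, ("fog", "fog"): 0.0}

prob = LpProblem("nn_embedding", LpMinimize)
x = {l: {n: LpVariable(f"x_{l}_{n}", cat=LpBinary) for n in nodes} for l in layers}
# y[(a,b)][n][m] >= 1 only when layer a sits on node n and its successor b on node m.
y = {e: {n: {m: LpVariable(f"y_{e[0]}_{e[1]}_{n}_{m}", lowBound=0)
             for m in nodes} for n in nodes} for e in inter_traffic_mbps}

proc = lpSum(x[l][n] * layer_mips[l] * nodes[n]["w_per_mips"]
             for l in layers for n in nodes)
net = lpSum(y[e][n][m] * inter_traffic_mbps[e] * link_w_per_mbps[(n, m)]
            for e in inter_traffic_mbps for n in nodes for m in nodes)
prob += proc + net                                           # joint power objective

for l in layers:                                             # each module embedded exactly once
    prob += lpSum(x[l][n] for n in nodes) == 1
for n in nodes:                                              # node processing capacity
    prob += lpSum(x[l][n] * layer_mips[l] for l in layers) <= nodes[n]["capacity"]
for (a, b) in inter_traffic_mbps:                            # linearize x[a][n] * x[b][m]
    for n in nodes:
        for m in nodes:
            prob += y[(a, b)][n][m] >= x[a][n] + x[b][m] - 1

prob.solve()
print({l: next(n for n in nodes if value(x[l][n]) > 0.5) for l in layers})
```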