Facilitating the revolution towards smarter cities, vehicles are becoming smarter and are equipped with resources that go beyond the transportation function. On-Board Units (OBUs) are capable computers inside vehicles that serve safety and non-safety applications. However, much of this capacity is underutilized. At the same time, more users are relying on cloud computing, which is becoming costly and energy consuming. In this paper, we develop a Mixed Integer Linear Programming (MILP) model that optimizes the allocation of processing demands in an architecture encompassing vehicles, edge computing and cloud computing, with the objective of minimizing power consumption. The results show power savings of 70%-90% compared to conventional clouds for small demands. For medium and large demand sizes, the results show 20%-30% power savings, as the cloud is used partially due to capacity limitations of the vehicular and edge nodes.

INTRODUCTION

End users are growing increasingly dependent on cloud services and data centers [1]. As the demand for cloud services grows, data centers tend to grow even bigger and more expensive in terms of both monetary cost and energy consumption. The energy consumption of clouds and data centers contributes a large share of the total cost and power consumption of the Information and Communication Technology (ICT) sector. Consequently, much effort is now being devoted to exploring alternatives that are more energy efficient while remaining equally capable [2][3][4][5][6][7][8][9][10][11]. One approach under active evaluation is distributed service provisioning, i.e., the installation of mini data centers close to the end users. In [12], data processing is performed at different layers of the network, and not only in the core cloud, through optimized placement of Virtual Machines (VMs) in IoT devices. A comparison between centralized data centers and nano data centers, showing the validity of small data centers and the factors that affect them, was carried out in [13]. The work in [14] analysed the energy consumption and latency of computation offloading in mobile clouds.

Modern vehicles are increasingly viewed as smart machines with plentiful computing resources. Research in the area of vehicular networks is very promising and ranges from the Internet of Vehicles (IoV) [15] to vehicular clouds and VaaR (Vehicle as a Resource) [16]. Our work presents an end-to-end architecture that uses vehicular and edge computing as the first level of processing resources, and compares this architecture with conventional clouds from an energy consumption point of view. For the remainder of the paper, Section 2 presents the proposed architecture, Section 3 discusses the optimization model and its results, and Section 4 concludes the paper.
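To illustrate the kind of allocation problem such a MILP model addresses, the sketch below formulates a minimal demand-placement problem over vehicular, edge and cloud nodes using the PuLP library. This is not the paper's formulation; the node names, capacities and power-per-unit figures are assumed purely for illustration.

```python
# Illustrative sketch only: a minimal MILP in the spirit of the allocation
# problem described above. All numeric values are assumptions, not paper data.
import pulp

demands = {"d1": 4, "d2": 6, "d3": 10}           # processing demand sizes (assumed units)
nodes = {                                         # name: (capacity, power per unit) -- assumed
    "vehicle": (8, 2.0),
    "edge":    (12, 3.5),
    "cloud":   (100, 9.0),
}

prob = pulp.LpProblem("demand_allocation", pulp.LpMinimize)

# x[d][n] = 1 if demand d is served entirely by node n
x = pulp.LpVariable.dicts("x", (demands, nodes), cat="Binary")

# Objective: total processing power consumed across all nodes
prob += pulp.lpSum(demands[d] * nodes[n][1] * x[d][n] for d in demands for n in nodes)

# Each demand must be placed on exactly one node
for d in demands:
    prob += pulp.lpSum(x[d][n] for n in nodes) == 1

# Node capacity limits
for n in nodes:
    prob += pulp.lpSum(demands[d] * x[d][n] for d in demands) <= nodes[n][0]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for d in demands:
    for n in nodes:
        if pulp.value(x[d][n]) > 0.5:
            print(f"{d} -> {n}")
```

With these assumed figures, the solver fills the low-power vehicular and edge nodes first and spills the remaining demand to the cloud, mirroring the behaviour described in the abstract for larger demand sizes.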
The introduction of cloud data centres has opened new possibilities for the storage and processing of data, augmenting the limited capabilities of peripheral devices. Large data centres tend to be located far from the end users, which increases latency and power consumption in the interconnecting networks. These limitations led to the introduction of edge processing, where small distributed data centres or fog units are located at the edge of the network, close to the end user. Vehicles can have substantial processing capabilities, often unused, in their on-board units (OBUs). These can be used to augment the network edge processing capabilities. In this paper, we extend our previous work and develop a mixed integer linear programming (MILP) formulation that optimizes the allocation of networking and processing resources to minimize power consumption. Our edge processing architecture includes vehicular processing nodes, edge processing and cloud infrastructure. Furthermore, in this paper our optimization formulation includes delay. Compared to pure power minimization, the new formulation reduces delay significantly while resulting in a very limited increase in power consumption.
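One way such a delay-aware formulation can be expressed is sketched below; the symbols and the weighted-sum approach are illustrative assumptions rather than the paper's exact model. Here $x_{d,n}$ indicates that demand $d$ is placed on node $n$, $s_d$ is the demand size, $P_n$ and $\tau_n$ are the assumed power-per-unit and delay of node $n$, $C_n$ is its capacity, and $\alpha$ trades power against delay.

```latex
\min \quad \alpha \sum_{d \in D} \sum_{n \in N} P_n \, s_d \, x_{d,n}
         \;+\; (1-\alpha) \sum_{d \in D} \sum_{n \in N} \tau_n \, x_{d,n}
\qquad \text{s.t.} \qquad
\sum_{n \in N} x_{d,n} = 1 \;\; \forall d \in D, \quad
\sum_{d \in D} s_d \, x_{d,n} \le C_n \;\; \forall n \in N, \quad
x_{d,n} \in \{0,1\}.
```

Setting $\alpha = 1$ recovers pure power minimization, while smaller values push demands towards nearby vehicular and edge nodes with lower delay, which is consistent with the trade-off the abstract describes.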
Modern vehicles equipped with on-board units (OBUs) are playing an essential role in the emerging smart city revolution. The vehicular processing resources, however, are not used to their full potential. The concept of vehicular clouds has been proposed to exploit these underutilized resources, supplementing cloud computing services to relieve the burden on centralized cloud data centers and improve quality of service. In this paper we introduce a vehicular cloud architecture supported by fixed edge computing nodes and a central cloud data center. A mixed integer linear programming (MILP) model is developed to optimize the allocation of the processing demands in the distributed architecture while minimizing the overall power consumption. The results show power savings as high as 84% compared to processing in the conventional cloud. A heuristic algorithm with performance approaching that of the MILP model is developed to validate the MILP model and to allocate processing demands in real time.

INDEX TERMS: vehicular clouds, edge computing, fog, power optimization, distributed processing, MILP.
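The paper's heuristic is not reproduced here, but a simple greedy placement rule gives a feel for how real-time allocation of this kind can be done. The sketch below assigns each demand to the lowest-power node that still has capacity; the node names, capacities and power figures are assumed for illustration.

```python
# Illustrative sketch only: a greedy allocation heuristic in the spirit of the
# real-time allocation described above. All values are assumptions.
def greedy_allocate(demands, nodes):
    """Assign each demand to the feasible node with the lowest power per unit.

    demands: dict name -> size
    nodes:   dict name -> {'capacity': ..., 'power_per_unit': ...}
    """
    remaining = {n: spec["capacity"] for n, spec in nodes.items()}
    placement = {}
    # Place the largest demands first so they can still fit in low-power nodes
    for d, size in sorted(demands.items(), key=lambda kv: -kv[1]):
        candidates = [n for n in nodes if remaining[n] >= size]
        if not candidates:
            placement[d] = None          # demand cannot be served
            continue
        best = min(candidates, key=lambda n: nodes[n]["power_per_unit"])
        placement[d] = best
        remaining[best] -= size
    return placement

if __name__ == "__main__":
    demands = {"d1": 4, "d2": 6, "d3": 10}
    nodes = {
        "vehicle": {"capacity": 8,   "power_per_unit": 2.0},
        "edge":    {"capacity": 12,  "power_per_unit": 3.5},
        "cloud":   {"capacity": 100, "power_per_unit": 9.0},
    }
    print(greedy_allocate(demands, nodes))
```

A greedy rule like this runs in milliseconds and, for simple instances, produces placements close to the MILP optimum, which is the kind of behaviour the abstract attributes to the paper's heuristic.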