Computational offloading is a strategy by which mobile device (MD) users can access the superior processing power of a Multi-Access Edge Computing (MEC) server network. In this paper, we contribute a model of a system consisting of multiple MEC servers and multiple MD users. Each MD has multiple computational tasks to perform, and each task can either be computed locally on the MD or offloaded to one of the MEC servers. For this system, assuming global knowledge, we compute the theoretical optimal allocation that minimises the time required to complete all tasks. We then contribute a distributed heuristic algorithm that allows each MD to decide independently, using local knowledge only, how to handle each individual task. Furthermore, we propose three approaches for deciding whether to offload each task and three mechanisms for determining which MEC server a task should be offloaded to. We use simulations to evaluate these approaches in terms of how closely they approximate the theoretical optimum. The proposed heuristic algorithm is tested on a range of experiments, and the results demonstrate that it produces solutions of reasonable quality.
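The abstract does not reproduce the decision rule itself; as a rough illustration only, the sketch below shows one way an MD could use purely local knowledge to compare local execution against offloading to each MEC server. The class names, fields and the greedy pick-the-fastest-server rule are assumptions for illustration, not the authors' algorithm or their three proposed approaches.

```python
# Illustrative sketch only: one plausible local-knowledge offloading rule.
# All names, fields and the greedy server-selection rule are assumptions.

from dataclasses import dataclass

@dataclass
class Task:
    cycles: float        # CPU cycles required to compute the task
    data_bits: float     # input data that must be uploaded if offloaded

@dataclass
class MecServer:
    cpu_hz: float        # server processor speed
    link_bps: float      # uplink speed seen by this MD
    queued_cycles: float = 0.0   # MD's local estimate of the server's backlog

def local_time(task: Task, md_cpu_hz: float) -> float:
    # time to compute the task on the MD's own processor
    return task.cycles / md_cpu_hz

def offload_time(task: Task, server: MecServer) -> float:
    # upload time plus estimated wait-and-compute time on the server
    transfer = task.data_bits / server.link_bps
    compute = (server.queued_cycles + task.cycles) / server.cpu_hz
    return transfer + compute

def decide(task: Task, md_cpu_hz: float, servers: list[MecServer]):
    """Offload to the server with the smallest estimated finish time,
    but only if that beats computing the task locally."""
    best = min(servers, key=lambda s: offload_time(task, s))
    if offload_time(task, best) < local_time(task, md_cpu_hz):
        best.queued_cycles += task.cycles   # update the local backlog estimate
        return ("offload", best)
    return ("local", None)
```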
Major research efforts have recently been made to develop resource orchestration solutions that flexibly link edge nodes with centralised cloud resources, so as to maximise the efficiency with which users can access this continuum of resources. In this context, we consider the case of Big Data analytics, in which total task completion time can be reduced by routing tasks initially to edge servers and subsequently to cloud resources. We demonstrate that the complexity of the computational tasks, the Wide Area Network (WAN) speed and the potential overload of edge servers (as reflected by CPU workloads) are crucial for achieving completion time reductions by offloading from edge to cloud resources. The edge-cloud orchestrators are situated in the edge nodes and therefore require continuous access to the WAN speeds (and their fluctuations), the edge server CPU workloads and the complexity of the Big Data analytics tasks in order to make accurate edge-to-cloud offloading decisions. With favourable values of these three parameters, transferring large-scale data from edge nodes to cloud resources can reduce completion times by up to 97% and meet client deadlines for computational tasks with responsive and agile solutions.
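As a hedged illustration of how the three parameters named above (task complexity, WAN speed, edge CPU workload) could enter an edge-to-cloud offloading decision, the sketch below compares an estimated edge finish time, inflated by the edge server's existing workload, against WAN transfer plus cloud execution. The function names, parameters and the simple linear model are assumptions, not the paper's orchestrator logic.

```python
# Illustrative sketch of an edge-to-cloud offloading test. All parameter
# names and the linear time model are assumptions for illustration.

def edge_finish_time(task_cycles: float, edge_cpu_hz: float, edge_load: float) -> float:
    # edge_load in [0, 1): share of the edge CPU already committed to other work
    return task_cycles / (edge_cpu_hz * (1.0 - edge_load))

def cloud_finish_time(task_cycles: float, data_bits: float,
                      wan_bps: float, cloud_cpu_hz: float) -> float:
    # WAN transfer of the input data followed by execution on the cloud
    return data_bits / wan_bps + task_cycles / cloud_cpu_hz

def offload_to_cloud(task_cycles, data_bits, edge_cpu_hz, edge_load,
                     wan_bps, cloud_cpu_hz) -> bool:
    """Forward the task from the edge node to the cloud only if the WAN
    transfer plus cloud execution is expected to finish sooner than running
    it on the (possibly overloaded) edge server."""
    return cloud_finish_time(task_cycles, data_bits, wan_bps, cloud_cpu_hz) \
           < edge_finish_time(task_cycles, edge_cpu_hz, edge_load)
```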
Computation offloading plays a critical role in reducing task completion time for mobile devices. The advantages of offloading computation to cloud resources in Mobile Cloud Computing have been widely considered. In this paper, we investigate different scenarios for offloading to closer Multi-Access Edge Computing (MEC) servers for multiple users with a range of mobile devices and computational tasks. We present detailed simulation data showing how offloading can be beneficial in a MEC network under varying mobile user demand, heterogeneity in on-board mobile device and MEC processor speeds, computational task complexity, communication speeds, link access delays and mobile device user numbers. Unlike previous work, where simulations considered only limited communication speeds for offloading, we extend the range of link speeds and include two types of communication delay. We find that more computationally complex applications are offloaded preferentially (especially at higher server-to-mobile-device processor speed ratios), while low link speeds and delays caused by network latency or excessive user numbers erode the reductions in task completion time offered by offloading. Additionally, significant savings in mobile device energy usage are guaranteed except at very low link speeds.
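A minimal sketch of the kind of completion-time and energy comparison such simulations rest on is given below, assuming a simple model in which offloading incurs a link access delay, a network delay, an upload time and remote execution. The power figures and parameter names are illustrative assumptions, not values from the study.

```python
# Illustrative sketch of the offloading trade-off under link speed and delay.
# All parameter names and default power values are assumptions.

def t_local(cycles: float, md_cpu_hz: float) -> float:
    # completion time when the task stays on the mobile device
    return cycles / md_cpu_hz

def t_offload(cycles: float, data_bits: float, link_bps: float,
              server_cpu_hz: float, access_delay_s: float,
              network_delay_s: float) -> float:
    # link access delay + network delay + upload + remote execution
    return (access_delay_s + network_delay_s
            + data_bits / link_bps + cycles / server_cpu_hz)

def energy_saved(cycles, md_cpu_hz, data_bits, link_bps, server_cpu_hz,
                 p_compute_w=0.9, p_transmit_w=1.3, p_idle_w=0.3):
    """Energy the MD would spend computing locally minus the energy it spends
    transmitting the task and idling while the server computes."""
    e_local = p_compute_w * cycles / md_cpu_hz
    e_offload = (p_transmit_w * data_bits / link_bps
                 + p_idle_w * cycles / server_cpu_hz)
    return e_local - e_offload
```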
In recent years, there has been considerable interest in computational offloading algorithms, driven mainly by the potential savings that offloading offers in task completion time and mobile device energy consumption. This paper builds on the authors' previous work on computational offloading and describes a multi-objective optimization model that optimizes time and energy in a network with multiple Multi-Access Edge Computing (MEC) servers and Mobile Devices (MDs). Each MD has multiple computational tasks to process, and each task can be processed locally or offloaded to one of the MEC servers. Several heuristic offloading policies are proposed and tested with an objective function using a range of weightings for optimizing time and energy. The approaches are illustrated with three test cases of varying complexity. The objective function varies continuously as the weighting factors place more emphasis on either time or energy saving. The numerical tests demonstrate that the proposed heuristic algorithms produce near-optimal computational offloading solutions under a combined weighted score for scheduled task completion time and energy consumption.
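The exact objective function is not given in the abstract; the sketch below shows one plausible form of such a combined weighted score, in which a single weight trades normalised completion time against normalised energy. The normalisation against a baseline and the symbol alpha are assumptions, not the paper's formulation.

```python
# Illustrative sketch of a weighted time/energy objective. The normalisation
# and the weight alpha are assumptions, not the paper's exact objective.

def weighted_score(schedule_time_s: float, energy_j: float,
                   baseline_time_s: float, baseline_energy_j: float,
                   alpha: float) -> float:
    """alpha = 1 optimises completion time only, alpha = 0 optimises energy
    only; intermediate values trade the two normalised objectives off."""
    return (alpha * schedule_time_s / baseline_time_s
            + (1.0 - alpha) * energy_j / baseline_energy_j)
```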