In a mobile edge computing (MEC) network, mobile devices, also called edge clients, offload their computations to multiple edge servers that provide additional computing resources. Since the edge servers are placed at the network edge, e.g., at cell-phone towers, transmission delays between edge servers and edge clients are shorter than in cloud computing. In addition, edge clients can offload their tasks to other nearby edge clients with available computing resources by exploiting the Fog Computing (FC) paradigm. A major challenge in MEC and FC networks is to assign the tasks from edge clients to edge servers, as well as to other edge clients, so that the tasks are completed with minimum energy consumption and minimum processing delay. In this paper, we model task offloading in MEC as a constrained multi-objective optimization problem (CMOP) that minimizes both the energy consumption and the task processing delay of the mobile devices. To solve the CMOP, we design an evolutionary algorithm that can efficiently find a representative sample of the best trade-offs between energy consumption and task processing delay, i.e., the Pareto-optimal front. Compared to existing approaches for task offloading in MEC, our approach finds offloading decisions with lower energy consumption and task processing delay.
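To make the idea of evolving Pareto-optimal offloading decisions concrete, the following sketch shows one possible encoding: a decision vector assigns each task to an edge server or to local execution, and a simple evolutionary loop keeps the non-dominated (energy, delay) points. This is an illustrative toy, not the authors' algorithm; all task sizes, CPU frequencies, and energy constants are assumed.

```python
# Hedged sketch: evolutionary search for the Pareto front of (energy, delay).
# Constants below are assumptions for illustration only.
import random

NUM_TASKS, NUM_SERVERS = 10, 4
CPU_CYCLES = [random.uniform(1e8, 1e9) for _ in range(NUM_TASKS)]      # cycles per task
SERVER_FREQ = [random.uniform(2e9, 8e9) for _ in range(NUM_SERVERS)]   # Hz
LOCAL_FREQ, LOCAL_ENERGY_PER_CYCLE, TX_ENERGY_PER_TASK = 1e9, 1e-9, 0.3

def evaluate(decision):
    """Return (energy, delay) of one offloading decision vector; -1 means local execution."""
    energy = delay = 0.0
    for task, target in enumerate(decision):
        if target < 0:                       # execute on the edge client itself
            energy += CPU_CYCLES[task] * LOCAL_ENERGY_PER_CYCLE
            delay += CPU_CYCLES[task] / LOCAL_FREQ
        else:                                # offload to edge server `target`
            energy += TX_ENERGY_PER_TASK
            delay += CPU_CYCLES[task] / SERVER_FREQ[target]
    return energy, delay

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(pop):
    scored = [(evaluate(d), d) for d in pop]
    return [d for f, d in scored if not any(dominates(g, f) for g, _ in scored)]

# Mutate random decisions and keep only the non-dominated ones.
population = [[random.randint(-1, NUM_SERVERS - 1) for _ in range(NUM_TASKS)]
              for _ in range(40)]
for _ in range(200):
    child = random.choice(population)[:]
    child[random.randrange(NUM_TASKS)] = random.randint(-1, NUM_SERVERS - 1)
    population = pareto_front(population + [child])

for d in population:
    print(evaluate(d), d)
```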
With the advent of smart vehicles, several new latency-critical and data-intensive applications have emerged in Vehicular Networks (VNs). Computation offloading has emerged as a viable option, allowing vehicles to resort to nearby edge servers for remote processing within a requested service-latency requirement. Despite several advantages, computation offloading over resource-limited edge servers, together with vehicular mobility, remains a challenging problem. In particular, to avoid additional latency due to out-of-coverage operation, the mobility of Vehicular Users (VUs) introduces a bound on the amount of data that can be offloaded to nearby edge servers. Therefore, several approaches have been used to find the correct amount of data to be offloaded. Among others, Federated Learning (FL) has been highlighted as one of the most promising techniques, given the data-privacy concerns in VNs and the limited communication resources. However, FL consumes resources during its operation and therefore places an additional burden on resource-constrained VUs. In this work, we aim to optimize VN performance in terms of latency and energy consumption by considering both the FL and the computation offloading processes while selecting the proper number of FL iterations to be performed. To this end, we first propose an FL-inspired distributed learning framework for computation offloading in VNs, and then formulate a constrained optimization problem to jointly minimize the overall latency and the energy consumed. A Genetic Algorithm is proposed to solve the problem at hand and is compared with several benchmarks. The simulation results show the effectiveness of the proposed approach in terms of latency and energy consumption.
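The joint choice of offloaded-data fractions and the number of FL iterations can be sketched as a Genetic Algorithm over a mixed chromosome, as in the minimal example below. This is only an illustration of the optimization structure, not the paper's model: the transmission rate, CPU speeds, per-round FL costs, latency bound, and weighting factor are all assumed.

```python
# Hedged GA sketch: chromosome = per-VU offloaded fraction + number of FL rounds;
# fitness = weighted latency+energy cost with a penalty on latency-bound violation.
# All constants are hypothetical.
import random

NUM_VUS = 5
DATA_BITS = [random.uniform(1e6, 5e6) for _ in range(NUM_VUS)]  # assumed task sizes
RATE, EDGE_SPEED, LOCAL_SPEED = 10e6, 8e9, 1e9                  # bit/s, cycle/s (assumed)
CYCLES_PER_BIT, TX_POWER, LOCAL_POWER = 500, 0.5, 1.0
FL_COST_PER_ROUND = (0.05, 0.2)                                 # (latency s, energy J), assumed
LATENCY_BOUND, WEIGHT = 2.0, 0.5                                # constraint and trade-off weight

def cost(chrom):
    fractions, fl_rounds = chrom[:-1], chrom[-1]
    latency = energy = 0.0
    for bits, f in zip(DATA_BITS, fractions):
        off, loc = bits * f, bits * (1 - f)
        t_off = off / RATE + off * CYCLES_PER_BIT / EDGE_SPEED
        t_loc = loc * CYCLES_PER_BIT / LOCAL_SPEED
        latency += max(t_off, t_loc)               # offloaded and local parts run in parallel
        energy += TX_POWER * off / RATE + LOCAL_POWER * t_loc
    latency += fl_rounds * FL_COST_PER_ROUND[0]    # learning overhead
    energy += fl_rounds * FL_COST_PER_ROUND[1]
    penalty = 1e3 * max(0.0, latency - LATENCY_BOUND)
    return WEIGHT * latency + (1 - WEIGHT) * energy + penalty

def random_chrom():
    return [random.random() for _ in range(NUM_VUS)] + [random.randint(1, 10)]

population = [random_chrom() for _ in range(30)]
for _ in range(200):
    population.sort(key=cost)
    parents, children = population[:10], []
    while len(children) < 20:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, NUM_VUS + 1)     # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.2:                  # mutation
            idx = random.randrange(NUM_VUS + 1)
            child[idx] = random.random() if idx < NUM_VUS else random.randint(1, 10)
        children.append(child)
    population = parents + children

best = min(population, key=cost)
print("fractions:", [round(f, 2) for f in best[:-1]], "FL rounds:", best[-1])
```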
With the rapid advancement of Intelligent Transportation Systems (ITS) and vehicular communications, Vehicular Edge Computing (VEC) is emerging as a promising technology to support low-latency ITS applications and services. In this paper, we consider the computation offloading problem from mobile vehicles/users in a heterogeneous VEC scenario and focus on the network and base station selection problems, where different networks have different traffic loads. In a fast-varying vehicular environment, the computation offloading experience of users is strongly affected by the latency caused by congestion at the edge computing servers co-located with the base stations. However, because of the non-stationarity of such an environment and the shortage of information, predicting this congestion is a challenging task. To address this challenge, we propose an online learning algorithm and an off-policy learning algorithm based on multi-armed bandit theory. To dynamically select the least congested network in a piece-wise stationary environment, these algorithms predict the latency that offloaded tasks experience using the offloading history. In addition, to minimize task loss due to the mobility of the vehicles, we develop a method for base station selection. Moreover, we propose a relaying mechanism for the selected network that operates based on the sojourn time of the vehicles. Through extensive numerical analysis, we demonstrate that the proposed learning-based solutions adapt to the traffic changes of the network by selecting the least congested network, thereby reducing the latency of offloaded tasks. Moreover, we demonstrate that the proposed joint base station selection and relaying mechanism minimizes task loss in a vehicular environment.
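A sliding-window bandit is one simple way to track latency in a piece-wise stationary environment; the sketch below is a hedged illustration of that general idea rather than the proposed algorithms, and the synthetic latency model, window length, and exploration bonus are assumptions.

```python
# Hedged sketch: sliding-window UCB-style selection of the least congested network (arm).
# The window lets the latency estimate follow piece-wise stationary traffic changes.
# Latency feedback is synthetic and all constants are assumed.
import math, random
from collections import deque

NUM_NETWORKS, WINDOW, ROUNDS = 3, 50, 500

def observed_latency(network, t):
    """Synthetic latency feedback; congestion on network 0 jumps at t = 250."""
    base = [0.2, 0.35, 0.5]
    if network == 0 and t > 250:
        base[0] = 0.8                      # abrupt traffic change
    return random.gauss(base[network], 0.05)

history = [deque(maxlen=WINDOW) for _ in range(NUM_NETWORKS)]

for t in range(1, ROUNDS + 1):
    untried = [n for n in range(NUM_NETWORKS) if not history[n]]
    if untried:                            # play every arm once first
        choice = untried[0]
    else:
        def index(n):
            mean = sum(history[n]) / len(history[n])
            bonus = math.sqrt(2 * math.log(min(t, WINDOW)) / len(history[n]))
            return mean - bonus            # optimism: lower index = more attractive
        choice = min(range(NUM_NETWORKS), key=index)
    history[choice].append(observed_latency(choice, t))

print("recent mean latency per network:",
      [round(sum(h) / len(h), 3) for h in history])
```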