In Internet of Things (IoT) edge computing, task offloading introduces additional transmission delay and transmission energy consumption. To reduce the resource cost of task offloading and improve server resource utilization, this paper models the task offloading problem as a joint cost-minimization decision problem that integrates the processing latency, processing energy consumption, and drop rate of latency-sensitive tasks. An Online Predictive Offloading (OPO) algorithm based on Deep Reinforcement Learning (DRL) and Long Short-Term Memory (LSTM) networks is proposed to solve this offloading decision problem. In the training phase, the algorithm predicts the edge-server load in real time with an LSTM network, which improves the convergence accuracy and convergence speed of the DRL algorithm during offloading. In the testing phase, the LSTM network predicts the characteristics of the next task, and the DRL decision model then allocates computational resources for that task in advance, further reducing the task response delay and enhancing the offloading performance of the system. Experimental evaluation shows that the algorithm reduces the average latency by 6.25%, the offloading cost by 25.6%, and the task drop rate by 31.7%.
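The mechanism sketched in this abstract, a load predictor feeding the offloading agent's state so that decisions can account for server load ahead of time, can be illustrated with a toy example. Everything below is an illustrative assumption rather than the paper's implementation: a moving-average predictor stands in for the LSTM, tabular Q-learning stands in for the deep RL model, and the cost function, bucketing, and names are invented.

```python
# Toy sketch: an offloading agent whose state is a *predicted* server load.
# The moving-average predictor stands in for the paper's LSTM; tabular
# Q-learning stands in for the DRL decision model. Costs and names are
# illustrative only.
import random

random.seed(0)

def predict_load(history, window=3):
    """Stand-in for the LSTM load predictor: moving average of recent loads."""
    recent = history[-window:]
    return sum(recent) / len(recent)

class OffloadAgent:
    """Tabular Q-learning over predicted-load buckets -> {local, offload}."""
    ACTIONS = ("local", "offload")

    def __init__(self, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = {}  # (state, action) -> estimated value
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def state(self, predicted_load):
        return min(int(predicted_load * 10), 9)  # bucket load into 0..9

    def act(self, s):
        if random.random() < self.eps:
            return random.choice(self.ACTIONS)
        return max(self.ACTIONS, key=lambda a: self.q.get((s, a), 0.0))

    def learn(self, s, a, reward, s_next):
        best_next = max(self.q.get((s_next, b), 0.0) for b in self.ACTIONS)
        old = self.q.get((s, a), 0.0)
        self.q[(s, a)] = old + self.alpha * (reward + self.gamma * best_next - old)

def cost(action, load):
    """Latency/energy proxy: offloading is cheap unless the server is busy."""
    return 0.6 if action == "local" else 0.2 + load

loads = [0.1]
agent = OffloadAgent()
for _ in range(5000):
    s = agent.state(predict_load(loads))
    a = agent.act(s)
    # Server load follows a bounded random walk the agent cannot control.
    loads.append(max(0.0, min(1.0, loads[-1] + random.uniform(-0.2, 0.2))))
    agent.learn(s, a, -cost(a, loads[-1]), agent.state(predict_load(loads)))
```

After training, the agent learns to offload when the predicted load is low and to compute locally when the predicted load is high, which is the qualitative behavior the predictive state is meant to enable.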
Unmanned Surface Vehicles (USVs) generate large amounts of data that must be processed in real time, but they are usually limited by computational and battery resources, so they need to offload tasks to the edge for processing. However, when many USVs offload tasks to the edge nodes, some offloaded tasks may be dropped because of queuing timeouts. Existing task offloading methods generally consider the latency or overall system energy consumption of collaborative processing at the edge and end layers, but not the energy wasted when tasks are dropped. To address this, this paper establishes a task offloading model that minimizes long-term task latency and energy consumption by jointly considering the requirements of latency- and energy-sensitive tasks and the overall load dynamics across the cloud, edge, and end layers. A deep reinforcement learning (DRL)-based Task Offloading with Cloud Edge Jointly Load Balance Optimization algorithm (TOLBO) is proposed to select the best edge or cloud server for offloading. Simulation results show that, compared with other algorithms, TOLBO improves the energy-consumption utilization of the cloud and edge nodes while significantly reducing the task drop rate, average latency, and energy consumption of end devices.

INDEX TERMS USV, mobile edge computing, offloading delay, energy consumption, DRL
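The joint cost model described in this abstract can be sketched as follows. All names, weights, and rates are illustrative assumptions, and a one-step greedy rule stands in for the paper's DRL policy; the point is only to show latency, energy, and a drop penalty for queuing timeouts combined into a single offloading cost across local, edge, and cloud targets.

```python
# Toy sketch of a TOLBO-style cost model: each task goes to the local
# device, an edge server, or the cloud, chosen to minimize a weighted sum
# of latency and energy plus a penalty for tasks dropped by queuing
# timeouts. All numbers and names are illustrative; a greedy rule stands
# in for the paper's DRL policy.
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    proc_rate: float        # task units processed per second
    tx_delay: float         # one-way transmission delay (s), 0 for local
    energy_per_unit: float  # device-side energy per task unit
    queue: float = 0.0      # current backlog in task units

def estimated_cost(t: Target, size: float, deadline: float,
                   w_lat=1.0, w_eng=0.5, drop_penalty=10.0):
    latency = t.tx_delay + (t.queue + size) / t.proc_rate
    energy = size * t.energy_per_unit
    dropped = latency > deadline  # queuing timeout -> task is dropped
    return w_lat * latency + w_eng * energy + (drop_penalty if dropped else 0.0)

def offload(targets, size, deadline):
    best = min(targets, key=lambda t: estimated_cost(t, size, deadline))
    best.queue += size  # growing backlog steers later tasks elsewhere
    return best.name

targets = [
    Target("usv-local", proc_rate=1.0, tx_delay=0.0, energy_per_unit=1.0),
    Target("edge-1", proc_rate=5.0, tx_delay=0.1, energy_per_unit=0.2),
    Target("cloud", proc_rate=20.0, tx_delay=0.5, energy_per_unit=0.1),
]

choices = [offload(targets, size=1.0, deadline=1.0) for _ in range(6)]
```

Because each assignment grows the chosen target's backlog, successive tasks spread across the edge and cloud servers rather than piling onto one node, which is the load-balancing effect the drop penalty and queue term are meant to capture.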