With the combination of Mobile Edge Computing (MEC) and next-generation cellular networks, computation requests from end devices can be offloaded promptly and accurately to edge servers deployed on Base Stations (BSs). However, due to the dense, heterogeneous deployment of BSs, an end device may be covered by more than one BS, which raises new challenges for the offloading decision, namely whether and where to offload computing tasks to achieve low latency and low energy cost. This paper formulates a multi-user-to-multi-server (MUMS) edge computing problem in ultra-dense cellular networks. The MUMS problem is divided and conquered in two phases: server selection and offloading decision. In the server selection phase, each mobile user is assigned to one BS, considering both physical distance and server workload. After this grouping, the original problem decomposes into parallel multi-user-to-one-server offloading subproblems. To obtain fast, near-optimal solutions to these subproblems, a distributed offloading strategy based on a binary-coded genetic algorithm is designed to produce adaptive offloading decisions. A convergence analysis of the genetic algorithm is given, and extensive simulations show that the proposed strategy significantly reduces the average latency and energy consumption of mobile devices. Compared with state-of-the-art offloading approaches, our strategy reduces the average delay by 56% and the total energy consumption by 14% in ultra-dense cellular networks.
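The abstract does not specify the genetic algorithm's exact encoding or fitness function; the sketch below illustrates one plausible binary-coded GA for a single multi-user-to-one-server subproblem, where bit i of a chromosome means user i offloads (1) or executes locally (0). The per-user cost arrays and the congestion penalty are hypothetical stand-ins for the paper's latency-plus-energy model.

```python
import random

def fitness(decision, local_cost, offload_cost, congestion_penalty=0.2):
    """Cost of a binary offloading decision (lower is better).

    local_cost[i] / offload_cost[i] are hypothetical weighted
    latency+energy costs for user i; offloaded tasks share the edge
    server, so their cost grows with the number of offloaders.
    """
    n_offloaded = sum(decision)
    total = 0.0
    for i, bit in enumerate(decision):
        if bit:
            total += offload_cost[i] * (1 + congestion_penalty * (n_offloaded - 1))
        else:
            total += local_cost[i]
    return total

def genetic_offloading(local_cost, offload_cost, pop_size=50, generations=200,
                       crossover_rate=0.8, mutation_rate=0.02):
    """Binary-coded GA: elitist selection, one-point crossover, bit-flip mutation."""
    n = len(local_cost)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda d: fitness(d, local_cost, offload_cost))
        elite = pop[: pop_size // 2]                 # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = random.sample(elite, 2)
            if random.random() < crossover_rate:     # one-point crossover
                cut = random.randrange(1, n)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            for i in range(n):                       # bit-flip mutation
                if random.random() < mutation_rate:
                    child[i] ^= 1
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda d: fitness(d, local_cost, offload_cost))

# Example with 5 users and made-up costs:
best = genetic_offloading(local_cost=[4.0, 2.5, 3.0, 5.0, 1.5],
                          offload_cost=[1.0, 2.0, 1.2, 1.5, 2.5])
```

Because each BS solves its own subproblem over only its assigned users, such a GA can run in parallel at every server, which is consistent with the distributed strategy the abstract describes.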
Data storage (DS) optimizations (e.g., achieving low latency for data access) in data center networks (DCNs) are difficult online decision-making problems. Previously, they were tackled with heuristics under static network models, which rely heavily on the designer's understanding of the environment. Encouraged by recent successes of deep reinforcement learning on intricate online assignment problems, we propose to use the Q-learning (QL) technique to train on and learn from historical DS decisions, which can significantly reduce data access delay. However, QL faces two challenges to wide use in data centers: massive input data and blindness in parameter settings, both of which severely hamper the convergence of the learning process. To solve these two key problems, we develop an evolutionary QL scheme named LFDS (Low-latency and Fast-convergence Data Storage). In the initial stage of LFDS, the input matrix of QL is sparsified to shrink the dimensionality of the massive input data while retaining as much of its information as possible. In the subsequent training phase, a specialized neural network is adopted to achieve a quick approximation. To overcome the blindness during QL training, the two key parameters, the learning rate and the discount rate, are carefully tested against real data input and the network architecture. Preferred ranges for the learning rate and discount rate are recommended for the use of QL in data centers, which brings high training rewards and fast convergence. Extensive simulations with real-world data show that data access latency is decreased by 23.5% and the convergence rate is increased by 15%.

INDEX TERMS: Data center networks, data access, latency, reinforcement learning, Q-learning.
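The abstract stresses that LFDS's convergence hinges on the learning rate and the discount rate. The tabular sketch below is not the paper's neural-network approximator; it is a minimal Q-learning loop, assuming a hypothetical environment exposing `reset()` and `step(action)`, that shows exactly where those two hyperparameters enter the update rule.

```python
import numpy as np

def q_learning(env, n_states, n_actions,
               learning_rate=0.1,   # alpha: step size of each Q update
               discount_rate=0.9,   # gamma: weight given to future rewards
               epsilon=0.1, episodes=500):
    """Tabular Q-learning; `env` is a hypothetical interface whose
    reset() returns a state index and step(a) returns
    (next_state, reward, done). In a DS setting, states could encode
    storage placements and actions candidate data-placement moves."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # epsilon-greedy exploration
            if np.random.rand() < epsilon:
                action = np.random.randint(n_actions)
            else:
                action = int(np.argmax(Q[state]))
            next_state, reward, done = env.step(action)
            # Bellman update: nudge Q(s, a) toward r + gamma * max_a' Q(s', a')
            target = reward + discount_rate * np.max(Q[next_state])
            Q[state, action] += learning_rate * (target - Q[state, action])
            state = next_state
    return Q
```

A learning rate that is too large makes the updates oscillate, while a discount rate near 0 ignores long-term access latency; sweeping both, as the paper does on real traces, is how the recommended ranges would be obtained.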