The rapidly expanding number of IoT devices is generating huge quantities of data, but public concern over data privacy means users are apprehensive about sending data to a central server for Machine Learning (ML) purposes. The easily changed behaviours of edge infrastructure that Software Defined Networking provides make it possible to collate IoT data at edge servers and gateways, where Federated Learning (FL) can be performed: building a central model without uploading data to the server. FedAvg is an FL algorithm which has been the subject of much study; however, it suffers from a large number of rounds to convergence with non-Independent, Identically Distributed (non-IID) client datasets and high communication costs per round. We propose adapting FedAvg to use a distributed form of Adam optimisation, greatly reducing the number of rounds to convergence, along with novel compression techniques, to produce Communication-Efficient FedAvg (CE-FedAvg). We perform extensive experiments with the MNIST/CIFAR-10 datasets, IID/non-IID client data, varying numbers of clients, client participation rates, and compression rates. These show CE-FedAvg can converge to a target accuracy in up to 6× fewer rounds than similarly compressed FedAvg, while uploading up to 3× less data, and is more robust to aggressive compression. Experiments on an edge-computing-like testbed using Raspberry Pi clients also show CE-FedAvg is able to reach a target accuracy in up to 1.7× less real time than FedAvg.
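The core idea, combining FedAvg-style aggregation with Adam optimisation, can be illustrated with a minimal sketch of one communication round. This is not the paper's CE-FedAvg implementation (which also involves compression and a distributed form of Adam); it is a simplified, illustrative variant in which the server applies an Adam-style step to the size-weighted average of client deltas. All function and variable names here are assumptions for illustration.

```python
# Hedged sketch: one server round of FedAvg where the averaged client
# update (delta from the global model) is fed through an Adam-style step.
# Models are flattened to lists of floats for simplicity.

def fedavg_adam_round(global_w, client_deltas, client_sizes, m, v, t,
                      lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """Aggregate weighted client deltas, then take one Adam step."""
    total = sum(client_sizes)
    # Size-weighted average of the clients' model deltas, as in FedAvg.
    avg_delta = [
        sum(d[i] * n for d, n in zip(client_deltas, client_sizes)) / total
        for i in range(len(global_w))
    ]
    t += 1
    new_w, new_m, new_v = [], [], []
    for w, g, mi, vi in zip(global_w, avg_delta, m, v):
        mi = beta1 * mi + (1 - beta1) * g          # first-moment estimate
        vi = beta2 * vi + (1 - beta2) * g * g      # second-moment estimate
        m_hat = mi / (1 - beta1 ** t)              # bias correction
        v_hat = vi / (1 - beta2 ** t)
        # Deltas point toward lower loss, so we add the Adam step.
        new_w.append(w + lr * m_hat / (v_hat ** 0.5 + eps))
        new_m.append(mi)
        new_v.append(vi)
    return new_w, new_m, new_v, t

# Demo: two clients holding 10 and 30 samples, a two-weight model.
w, m, v, t = fedavg_adam_round(
    [0.0, 0.0], [[0.1, -0.2], [0.3, 0.0]], [10, 30],
    [0.0, 0.0], [0.0, 0.0], 0)
```

Because Adam normalises each coordinate by its estimated second moment, early steps have magnitude close to the learning rate regardless of the raw delta scale, which is one intuition for faster convergence than plain averaging.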
Multi-access Edge Computing (MEC) is an emerging paradigm which utilizes computing resources at the network edge to deploy heterogeneous applications and services. In an MEC system, mobile users and enterprises can offload computation-intensive tasks to nearby computing resources to reduce latency and save energy. When users make offloading decisions, task dependencies need to be considered. Due to the NP-hardness of the offloading problem, existing solutions are mainly heuristic and therefore have difficulty adapting to increasingly complex and dynamic applications. To address the challenges of task dependency and adaptation to dynamic scenarios, we propose a new Deep Reinforcement Learning (DRL) based offloading framework, which can efficiently learn the offloading policy uniquely represented by a specially designed Sequence-to-Sequence (S2S) neural network. The proposed DRL solution can automatically discover the common patterns behind various applications so as to infer an optimal offloading policy in different scenarios. Simulation experiments were conducted to evaluate the performance of the proposed DRL-based method with different data transmission rates and task numbers. The results show that our method outperforms two heuristic baselines and achieves nearly optimal performance.
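The dependency-aware cost that such an offloading policy must optimise can be sketched with a minimal model: given a DAG of tasks and a 0/1 offloading decision per task, compute the application finish time. This is not the paper's S2S/DRL method; it is only an illustrative cost model, with made-up task times, of the kind a learned policy would be trained to minimise. Per-task remote times are assumed to already include transmission delay.

```python
# Hedged sketch: finish time of a DAG application under an offloading plan.
# plan[i] = 0 runs task i locally, 1 offloads it to the edge server.
# Tasks are assumed to be listed in topological order, and this simplified
# model ignores resource contention (each task starts when its last
# predecessor finishes).

def finish_time(deps, local_t, remote_t, plan):
    """deps[i] lists the predecessor task ids of task i."""
    done = {}
    for i in range(len(plan)):
        ready = max((done[p] for p in deps[i]), default=0.0)
        run = remote_t[i] if plan[i] else local_t[i]
        done[i] = ready + run
    return max(done.values())

# Demo: a three-task chain 0 -> 1 -> 2. Local execution takes 4 units per
# task; offloaded execution (including transmission) takes 1 unit per task.
chain = [[], [0], [1]]
all_local = finish_time(chain, [4, 4, 4], [1, 1, 1], [0, 0, 0])
all_edge = finish_time(chain, [4, 4, 4], [1, 1, 1], [1, 1, 1])
```

An exhaustive search over the 2^n decision vectors gives the optimum this cost model admits, which is exactly what becomes intractable for large task graphs and motivates a learned policy.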
Content caching is a promising approach in edge computing to cope with the explosive growth of mobile data on 5G networks, where contents are typically placed on local caches for fast and repetitive data access. Due to the capacity limit of caches, it is essential to predict the popularity of files and cache the popular ones. However, the fluctuating popularity of files makes this prediction a highly challenging task. To tackle this challenge, many recent works propose learning-based approaches which gather the users' data centrally for training, but they bring a significant issue: users may not trust the central server and thus hesitate to upload their private data. To address this issue, we propose a Federated learning based Proactive Content Caching (FPCC) scheme, which does not require gathering users' data centrally for training. FPCC is based on a hierarchical architecture in which the server aggregates the users' updates using federated averaging, and each user performs training on its local data using hybrid filtering on stacked autoencoders. The experimental results demonstrate that, without gathering users' private data, our scheme still outperforms other learning-based caching algorithms such as m-ε-greedy and Thompson sampling in terms of cache efficiency.
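The placement step that any such popularity predictor feeds is simple to sketch: given per-file popularity scores (in FPCC, these would come from the federated stacked-autoencoder model), keep the top files that fit in the cache and measure the resulting hit ratio. The scores and request trace below are illustrative assumptions, not the paper's data.

```python
# Hedged sketch: capacity-limited proactive placement driven by predicted
# popularity scores, plus the cache-efficiency metric (hit ratio).

def place_cache(scores, capacity):
    """Keep the `capacity` files with the highest predicted popularity."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return set(ranked[:capacity])

def cache_hit_ratio(requests, cached):
    """Fraction of requests served from the local cache."""
    hits = sum(1 for f in requests if f in cached)
    return hits / len(requests)

# Demo: three files, cache holds two; scores stand in for model output.
cached = place_cache({"a": 0.9, "b": 0.5, "c": 0.1}, capacity=2)
ratio = cache_hit_ratio(["a", "a", "c", "b"], cached)
```

The learning problem FPCC tackles is producing good `scores` without centralising user data; the placement rule itself stays this simple.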
Localization techniques are becoming key to adding location context to Internet of Things (IoT) data without human perception and intervention. Meanwhile, the newly emerged Low-Power Wide-Area Network (LPWAN) and 5G technologies have become strong candidates for mass-market localization applications. However, various error sources have limited the localization performance achievable with such IoT signals. This paper reviews IoT localization systems through the following sequence: IoT localization system review; localization data sources; localization algorithms; localization error sources and mitigation; and localization performance evaluation. Compared to the related surveys, this paper has a more comprehensive and state-of-the-art review of IoT localization methods, an original review of IoT localization error sources and mitigation, an original review of IoT localization performance evaluation, and a more comprehensive review of IoT localization applications, opportunities, and challenges. Thus, this survey provides comprehensive guidance for peers who are interested in enabling localization ability in existing IoT systems, using IoT systems for localization, or integrating IoT signals with existing localization sensors.