Federated learning is a decentralized approach to machine learning that enables numerous devices to contribute collaboratively to model training while preserving the privacy of individual data. However, existing incentive mechanisms for hierarchical federated learning (HFL) consider only the data contribution of a single round, which is insufficient. For non-IID datasets, repeatedly selecting the same end devices causes the model weights to diverge in a particular direction, so a new metric is needed to avoid continuously selecting certain end devices and to preserve overall effectiveness. We introduce a metric that describes the importance of updates, the age of update (AoU), which helps select end devices that were not selected in the previous round and thereby promotes faster model convergence. We propose an incentive mechanism based on AoU, reputation, and data quantity in HFL (ARDHFL) and derive the optimal equilibrium solution of the corresponding three-stage Stackelberg game. This solution maximizes edge-cloud utility while incentivizing end devices to participate actively in HFL tasks and to provide high-quality data for training the HFL model. Finally, we conducted extensive experiments showing that ARDHFL effectively improves performance. Compared with a fixed scheme, a random scheme, FMore, and InFEDge, ARDHFL improves testing accuracy on the MNIST dataset by 29.7%, 9.3%, 6.8%, and 6.1%, respectively, and on the CIFAR-10 dataset by 40.2%, 33.1%, 16.4%, and 14.2%, respectively, while requiring fewer communication rounds to reach the same testing accuracy.
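The core idea behind AoU-based selection, as the abstract describes it, is to favor end devices that have gone the longest without being selected. A minimal sketch of such a selection rule (the counter structure and top-k policy here are illustrative assumptions, not the paper's exact formulation, which also weighs reputation and data quantity):

```python
def select_devices(aou, k):
    """Select k devices, favoring those with the highest age of update (AoU).

    aou: dict mapping device id -> rounds since the device was last selected.
    Returns the selected ids and the updated AoU counters.
    """
    # Rank devices by AoU so those skipped longest are chosen first.
    selected = sorted(aou, key=aou.get, reverse=True)[:k]
    for dev in aou:
        # Reset the counter for chosen devices; age the rest by one round.
        aou[dev] = 0 if dev in selected else aou[dev] + 1
    return selected, aou

# Device 'c' has been skipped for 5 rounds, so it is picked first.
ages = {'a': 0, 'b': 2, 'c': 5, 'd': 1}
chosen, ages = select_devices(ages, k=2)
# chosen == ['c', 'b']; ages == {'a': 1, 'b': 0, 'c': 0, 'd': 2}
```

Resetting the counter on selection and incrementing it otherwise is what prevents any single device's non-IID data from dominating consecutive aggregation rounds.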
Context: The rise of the artificial intelligence of things (AIoT) has enabled smart cities and industries, and UAV-assisted edge computing networks are an important technology supporting these scenarios. "UAV-assisted" refers to leveraging UAVs as dynamic, flexible infrastructure that assists edge-network data processing and communication tasks. Multiple UAVs can use their own resources and collaborate with edge servers to train artificial intelligence (AI) models.
Objective: Compared with cloud-based collaborative computing, UAV-assisted edge collaborative learning can reduce training and inference delays and improve user satisfaction. However, the UAV-assisted edge network scenario brings new challenges in transmission burden and energy consumption.
Method: This paper proposes a prototype-based joint optimization and training software system consisting of an optimization module and a training module. The optimization module first models an optimization problem covering energy consumption and prototype error, then solves it via problem transformation and plans the location of each UAV given the objects' positions. After the UAVs fly to the designated area and complete data collection, the UAVs and the edge server train a model using the proposed prototype-based collaborative training module, which enables multiple UAVs and an edge server to train a model collaboratively through lightweight prototype transmission and prototype aggregation. We also prove the convergence of the proposed collaborative training method.
Results: Our method reduces prototype error and energy consumption by at least 12.31% and improves model accuracy by 3.62% with little communication burden.
Conclusion: Finally, we verify system performance through experiments.
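Prototype-based collaborative training, as summarized above, replaces heavy model-weight exchange with compact per-class prototypes. A minimal sketch under common assumptions (each UAV sends the mean embedding per class plus its sample count, and the edge server computes a sample-count-weighted average; the paper's exact aggregation rule may differ):

```python
def local_prototypes(embeddings_by_class):
    """On a UAV: compute one prototype (mean embedding) per class,
    plus the per-class sample counts needed for weighted aggregation."""
    protos = {}
    for cls, vecs in embeddings_by_class.items():
        dim = len(vecs[0])
        protos[cls] = [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]
    counts = {cls: len(vecs) for cls, vecs in embeddings_by_class.items()}
    return protos, counts

def aggregate(prototype_sets):
    """On the edge server: sample-count-weighted average of UAV prototypes."""
    merged, totals = {}, {}
    for protos, counts in prototype_sets:
        for cls, vec in protos.items():
            if cls not in merged:
                merged[cls] = [0.0] * len(vec)
                totals[cls] = 0
            for i, x in enumerate(vec):
                merged[cls][i] += x * counts[cls]
            totals[cls] += counts[cls]
    return {cls: [x / totals[cls] for x in vec] for cls, vec in merged.items()}

# Two hypothetical UAVs with 2-D embeddings for class 0.
p1 = local_prototypes({0: [[1.0, 1.0], [3.0, 3.0]]})  # prototype [2.0, 2.0], 2 samples
p2 = local_prototypes({0: [[4.0, 4.0]]})              # prototype [4.0, 4.0], 1 sample
global_protos = aggregate([p1, p2])                   # weighted mean: (2*2 + 4*1) / 3
```

Only the prototypes and counts cross the UAV-to-server link, which is what keeps the communication burden small relative to exchanging full model parameters.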