This paper studies a multi-agent zeroth-order online optimization problem in a federated learning setting for target tracking. The agents can only sense their current distances to their targets and must maintain a minimum safe distance from each other to prevent collisions. Coordination among the agents and dissemination of collision-prevention information are managed by a central server using the federated learning paradigm. The proposed formulation leads to an instance of a distributed online nonconvex optimization problem that is solved by a group of communication-constrained agents. To cope with the agents' communication limitations, an error feedback-based compression scheme is employed for agent-to-server communication. The proposed algorithm is analyzed theoretically for the general class of distributed online nonconvex optimization problems. We provide non-asymptotic convergence rates whose dominant term is independent of the characteristics of the compression scheme. Our theoretical results rely on a new approach with significantly more relaxed assumptions than those standard in the literature. The performance of the proposed solution is further evaluated numerically, in terms of tracking errors and inter-agent collisions, in two relevant applications.
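The error feedback idea referenced above (carrying the part of each update that compression discards forward into later rounds, so nothing is permanently lost) can be sketched as follows. This is a minimal illustration assuming a top-k sparsifier as the compressor; the class and function names are hypothetical, not the paper's implementation.

```python
def top_k(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    idx = sorted(range(len(v)), key=lambda i: abs(v[i]), reverse=True)[:k]
    keep = set(idx)
    return [v[i] if i in keep else 0.0 for i in range(len(v))]


class ErrorFeedbackCompressor:
    """Compress agent-to-server messages, remembering the compression
    error (residual) so it is re-injected and corrected in later rounds."""

    def __init__(self, dim, k):
        self.residual = [0.0] * dim  # error accumulated so far
        self.k = k

    def compress(self, update):
        # Add back the error left over from previous rounds.
        corrected = [u + r for u, r in zip(update, self.residual)]
        message = top_k(corrected, self.k)
        # Store what this round's compression discarded.
        self.residual = [c - m for c, m in zip(corrected, message)]
        return message
```

Entries suppressed in one round accumulate in the residual and are eventually transmitted, which is the mechanism that typically lets the dominant convergence term stay independent of the compressor's aggressiveness.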
A novel Decentralized Noisy Model Update Tracking Federated Learning algorithm (FedNMUT) is proposed, tailored to function efficiently over noisy communication channels that reflect imperfect information exchange. The algorithm uses gradient tracking to reduce the impact of data heterogeneity while keeping communication overhead low. It incorporates noise into its parameters to mimic the conditions of noisy communication channels, thereby enabling consensus among clients over a communication graph topology even in such challenging environments. FedNMUT prioritizes parameter sharing and noise incorporation to increase the resilience of decentralized learning systems against noisy communication. We provide theoretical results for smooth nonconvex objective functions and show that our algorithm reaches an ϵ-stationary solution at a rate of O(1/√T), where T is the total number of communication rounds. Additionally, through empirical validation, we demonstrate that FedNMUT outperforms existing state-of-the-art methods and conventional parameter-mixing approaches in dealing with imperfect information sharing, confirming the proposed algorithm's ability to counteract the negative effects of communication noise in a decentralized learning framework.

Imperfect information exchange, such as noisy or quantized communication, has been examined in the context of average consensus algorithms within distributed frameworks. Yet, the ramifications of varying noise levels remain underexplored. Moreover, existing research, primarily focused on consensus problems, does not fully address the challenges encountered in contemporary decentralized optimization and learning paradigms [23], [24].
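As a deliberately simplified illustration of the update structure described above, the following sketch runs one round of gradient tracking over a noisy communication graph. Scalar parameters, quadratic local losses, and all names here are assumptions for illustration only, not FedNMUT's actual implementation: each client mixes noisy copies of its neighbors' parameters and trackers through a doubly stochastic matrix W, and the tracker y[i] estimates the network-wide average gradient.

```python
import random

def grad(i, x, a):
    # Illustrative local gradient: f_i(x) = 0.5 * (x - a[i])**2.
    return x - a[i]

def noisy_tracking_round(x, y, a, W, eta, sigma, rng):
    """One gradient-tracking round; sigma is the channel-noise level."""
    n = len(x)
    # Each client averages noisy copies of neighbors' parameters,
    # then steps along its tracked gradient direction.
    x_new = [sum(W[i][j] * (x[j] + rng.gauss(0.0, sigma)) for j in range(n))
             - eta * y[i] for i in range(n)]
    # Tracker update: mix noisy trackers, then correct with the
    # change in the local gradient so y keeps tracking the average.
    y_new = [sum(W[i][j] * (y[j] + rng.gauss(0.0, sigma)) for j in range(n))
             + grad(i, x_new[i], a) - grad(i, x[i], a) for i in range(n)]
    return x_new, y_new
```

With sigma = 0 and y initialized to the local gradients, iterating this round drives every client to the minimizer of the sum of the local losses; the noise term models the imperfect links the abstract describes.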
In contrast to Federated Learning (FL), where server assistance is common, Decentralized Federated Learning (DFL) operates without a central server: each client acts autonomously, running local Stochastic Gradient Descent (SGD) or one of its variants on its own data and interacting directly with neighboring clients.

In our previous paper [25], we performed a comparative study of three proposed algorithms for DFL under imperfect communication conditions, typified by noisy channels. These algorithms (FedNDL1, FedNDL2, and FedNDL3) differ in their handling of noise and parameter sharing, demonstrating varying degrees of resilience to communication noise. In this paper, we propose a novel algorithm that employs the Gradient Tracking method in DFL and compare its performance against the previously mentioned algorithms.

C. Paper's Contributions

This paper introduces a novel algorithm that employs the Gradient Tracking method in DFL while accounting for the impact of communication noise. Previous studies have evaluated the effectiveness of two-time-scale methods in DFL with noisy channels. However, these investigations were limited by restrictive assumptions, such as strong convexity [26]-[29]. These assumptions are rarely satisfied in practice…
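The serverless operation described above (local SGD step, then direct exchange with graph neighbors) can be sketched as a plain decentralized SGD round, the parameter-mixing baseline that gradient tracking improves upon. This is a minimal sketch with illustrative names, not any of the cited algorithms.

```python
def dsgd_round(x, grads, W, eta):
    """One round of plain decentralized SGD (parameter mixing).

    x     : list of per-client scalar parameters
    grads : list of per-client local gradient functions
    W     : doubly stochastic mixing matrix for the graph
    eta   : step size
    """
    n = len(x)
    # Each client takes a local gradient step on its own data...
    half = [x[i] - eta * grads[i](x[i]) for i in range(n)]
    # ...then averages parameters with its neighbors.
    return [sum(W[i][j] * half[j] for j in range(n)) for i in range(n)]
```

On sparse graphs with a constant step size, this baseline is known to carry a bias that grows with data heterogeneity across clients; replacing the plain mix with a gradient-tracking update is the standard remedy, which is why the proposed algorithm builds on that method.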