To build intelligent models under conventional architectures, local data must be transmitted to a cloud server, which causes heavy backhaul congestion, risks leaking personal information, and underutilizes network resources. To address these issues, federated learning (FL) offers a systematic framework that distributes the model-training process between local participants and a parameter server, so that model updates rather than raw data are exchanged. However, challenging issues of participant scheduling, aggregation policy design, model offloading, and resource management still remain in conventional FL architectures. This survey article presents state-of-the-art solutions for optimizing the orchestration of FL communications, focusing primarily on deep reinforcement learning (DRL)-based autonomy. The correlations between DRL and FL mechanisms are described through the optimized system architectures of the selected literature. The observable states, configurable actions, and target rewards of these approaches are examined to illustrate the applicability of DRL-assisted control to self-organizing FL systems. Various deployment strategies for Internet of Things applications are discussed. Furthermore, this article reviews open challenges and future research directions for improving practical performance. Advances in these aspects will drive the applicability of converged DRL and FL toward future autonomous, communication-efficient, and privacy-aware learning.
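To make the state/action/reward framing concrete, the following is a minimal, hypothetical sketch of RL-assisted participant scheduling, not a method from any surveyed work: the observable state is each client's discretized channel quality, the configurable action is which client to schedule in a round, and the target reward trades a proxy for model improvement against a fixed communication cost. Tabular Q-learning with a discount factor of zero (rounds treated as independent) is used in place of a deep network to keep the example self-contained; all constants are illustrative assumptions.

```python
import random

random.seed(0)

NUM_CLIENTS = 4      # hypothetical small FL deployment
EPISODES = 3000      # scheduling rounds used for training
ALPHA, EPSILON = 0.1, 0.1

def observe_state():
    # Observable state: discretized per-client channel quality (0 = poor, 1 = good).
    return tuple(random.randint(0, 1) for _ in range(NUM_CLIENTS))

def reward(state, action):
    # Target reward: a proxy for model improvement when the scheduled client's
    # update arrives over a good channel, minus a fixed communication cost.
    # These numbers are illustrative, not taken from any surveyed system.
    return (1.0 if state[action] == 1 else 0.0) - 0.2

q = {}  # tabular Q-values: (state, action) -> estimate

for _ in range(EPISODES):
    s = observe_state()
    # Configurable action: which client to schedule this round (epsilon-greedy).
    if random.random() < EPSILON:
        a = random.randrange(NUM_CLIENTS)
    else:
        a = max(range(NUM_CLIENTS), key=lambda x: q.get((s, x), 0.0))
    old = q.get((s, a), 0.0)
    # With rounds treated as independent (discount factor 0), the Q-update
    # reduces to an incremental average toward the observed reward.
    q[(s, a)] = old + ALPHA * (reward(s, a) - old)

def best(s):
    # Greedy policy after training: schedule the highest-valued client.
    return max(range(NUM_CLIENTS), key=lambda x: q.get((s, x), 0.0))

print(best((1, 0, 0, 0)))  # -> 0, the only good-channel client
```

In a full DRL formulation as discussed in the surveyed literature, the lookup table would be replaced by a neural value or policy network, and the state and action spaces would extend to continuous resource budgets and multi-client selection.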