Personalized Federated Learning (pFL), which maintains and deploys distinct local models, has gained increasing attention in recent years due to its success in handling the statistical heterogeneity of FL clients. However, standardized evaluation and systematic analysis of diverse pFL methods remain a challenge. First, the highly varied datasets, FL simulation settings, and pFL implementations prevent fast and fair comparisons of pFL methods. Second, the effectiveness and robustness of pFL methods are under-explored in various practical scenarios, such as generalization to new clients and the participation of resource-limited clients. Finally, the current pFL literature diverges in its evaluation and ablation protocols. To tackle these challenges, we propose the first comprehensive pFL benchmark, pFL-Bench, to facilitate rapid, reproducible, standardized, and thorough pFL evaluation. The proposed benchmark contains more than 10 datasets in diverse application domains with unified data partitions and realistic heterogeneous settings; a modular and easy-to-extend pFL codebase with more than 20 competitive pFL baseline implementations; and systematic evaluations under containerized environments in terms of generalization, fairness, system overhead, and convergence. We highlight the benefits and potential of state-of-the-art pFL methods and hope pFL-Bench enables further pFL research and broad applications that would otherwise be difficult owing to the absence of a dedicated benchmark.
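To make concrete the kind of method such a benchmark evaluates, the sketch below illustrates one common pFL pattern: each client keeps a private classification head while only the shared base layers are aggregated FedAvg-style. This is a minimal sketch under stated assumptions; the class names and the `SHARED_PREFIX` convention are illustrative, not pFL-Bench's actual API.

```python
import torch.nn as nn

SHARED_PREFIX = "base."  # which parameters are aggregated across clients (assumption)

class ClientModel(nn.Module):
    """Shared feature extractor plus a personalized, client-local head."""
    def __init__(self, in_dim=32, hidden=64, num_classes=10):
        super().__init__()
        self.base = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, num_classes)  # never aggregated

    def forward(self, x):
        return self.head(self.base(x))

def fedavg_shared(client_states, num_samples):
    """FedAvg over only the shared (base) parameters, weighted by data size."""
    total = sum(num_samples)
    return {
        name: sum(n / total * state[name] for state, n in zip(client_states, num_samples))
        for name in client_states[0]
        if name.startswith(SHARED_PREFIX)
    }

def load_shared(model, shared_state):
    """Overwrite a client's shared parameters; keep its personalized head."""
    state = model.state_dict()
    state.update(shared_state)
    model.load_state_dict(state)

clients = [ClientModel() for _ in range(3)]
shared = fedavg_shared([m.state_dict() for m in clients], num_samples=[100, 50, 200])
for m in clients:
    load_shared(m, shared)
```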
Although remarkable progress has been made by existing federated learning (FL) platforms in providing fundamental functionalities for development, these platforms cannot adequately tackle the challenges brought by the heterogeneity of FL scenarios in both academia and industry. To fill this gap, in this paper, we propose a flexible federated learning platform, named FederatedScope, for handling various types of heterogeneity in FL. For both flexibility and extensibility, FederatedScope adopts an event-driven architecture that frames an FL course as event-handler pairs: the behaviors of participants are described in handlers and triggered by message-passing events or by meeting certain conditions during training. For a new FL application, developers only need to specify the adopted FL algorithm by defining new types of events and the corresponding handling functions based on participants' behaviors, which FederatedScope executes automatically and asynchronously to balance effectiveness and efficiency. Meanwhile, towards an easy-to-use platform, FederatedScope provides rich built-in algorithms, including personalization, federated aggregation, privacy protection, and privacy attack, so that users can conveniently customize participant-specific training, fusion, aggregation, and protection. Besides, a federated hyperparameter optimization module is integrated into FederatedScope so that users can automatically tune their FL systems and resolve the instability brought by heterogeneity. We conduct a series of experiments on the provided easy-to-use and comprehensive FL benchmarks to validate the correctness and efficiency of FederatedScope. We have released FederatedScope for users at https://github.com/alibaba/FederatedScope to promote research and industrial deployment of federated learning in a variety of real-world applications.
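To make the event-driven pattern concrete, here is a minimal, self-contained sketch of participants whose behaviors are registered as handlers and triggered either by message passing or by a training condition being met. It mirrors the idea only; the class and event names (Participant, Server, "model_para") are hypothetical assumptions, not FederatedScope's actual interfaces.

```python
from collections import defaultdict

class Participant:
    """A participant whose behaviors are handlers keyed by event type."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def register_handler(self, event_type, handler):
        """Bind a behavior (handler) to an event type, e.g. 'model_para'."""
        self._handlers[event_type].append(handler)

    def on_message(self, event_type, payload):
        """Dispatch an incoming message to all handlers for its event type."""
        for handler in self._handlers[event_type]:
            handler(payload)

class Server(Participant):
    """Aggregates as soon as enough client updates arrive, approximating
    the condition-triggered, asynchronous execution described above."""
    def __init__(self, min_updates=2):
        super().__init__()
        self.buffer = []
        self.min_updates = min_updates
        self.register_handler("model_para", self._collect_update)

    def _collect_update(self, update):
        self.buffer.append(update)
        if len(self.buffer) >= self.min_updates:  # condition-triggered event
            self.on_message("aggregate", self.buffer)
            self.buffer = []

server = Server()
server.register_handler("aggregate", lambda ups: print(f"aggregating {len(ups)} updates"))
server.on_message("model_para", {"w": 1.0})
server.on_message("model_para", {"w": 2.0})  # second update triggers aggregation
```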
Large language models (LLMs) have demonstrated great capabilities in various natural language understanding and generation tasks. Platforms such as Hugging Face facilitate access to and use of pre-trained LLMs by different entities, ranging from computer science researchers to users with little machine learning background. These entities can further improve the performance of LLMs on their specific downstream tasks by fine-tuning them. When several entities have similar tasks of interest but cannot share their local data directly because of privacy concerns and regulations, federated learning (FL) is a mainstream solution for leveraging the data of different entities. Besides avoiding direct data sharing, FL can also achieve rigorous data privacy protection, model intellectual property protection, and model customization via composition with different techniques. However, fine-tuning LLMs in federated settings still lacks adequate support from existing FL frameworks, because it requires optimizing the consumption of significant communication and computational resources, preparing various data for different tasks, and meeting distinct information protection demands. This paper first discusses these challenges of federated fine-tuning of LLMs in detail, and then introduces our implemented package FederatedScope-LLM (FS-LLM) as a main contribution, which consists of the following components: (1) we build a complete end-to-end benchmarking pipeline, automating the processes of dataset preprocessing, federated fine-tuning execution or simulation, and performance evaluation of federated LLM fine-tuning for different capability demonstration purposes; (2) we provide comprehensive, off-the-shelf federated parameter-efficient fine-tuning (PEFT) algorithm implementations and versatile programming interfaces for future extension, enhancing the capabilities of LLMs in FL scenarios with low communication and computation costs, even without access to the full model (e.g., closed-source LLMs); (3) we adopt several accelerating and resource-efficient operators for fine-tuning LLMs with limited resources, along with flexible pluggable sub-routines for interdisciplinary study (e.g., LLMs in personalized FL). We conduct extensive and reproducible experiments to validate the effectiveness of FS-LLM and benchmark advanced LLMs with state-of-the-art parameter-efficient fine-tuning algorithms in a federated setting, which also yields many valuable insights into federated fine-tuning of LLMs for the research community. To facilitate further research and adoption, we release FS-LLM at https://github.com/alibaba/FederatedScope/tree/llm.
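The following hedged sketch shows the core communication-saving idea behind federated PEFT fine-tuning: clients wrap a model with LoRA adapters (here via the Hugging Face peft library, which supports custom torch modules) and exchange only the adapter weights rather than the full model. The tiny stand-in model and helper names are illustrative assumptions; see the FS-LLM repository for the real interfaces.

```python
import torch.nn as nn
from peft import LoraConfig, get_peft_model  # Hugging Face PEFT library

class TinyLM(nn.Module):
    """Stand-in for a real LLM so the sketch stays small and runnable."""
    def __init__(self, d=64, vocab=100):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        self.q_proj = nn.Linear(d, d)
        self.v_proj = nn.Linear(d, d)
        self.lm_head = nn.Linear(d, vocab)

    def forward(self, x):
        h = self.embed(x)
        return self.lm_head(self.q_proj(h) + self.v_proj(h))

# Inject low-rank adapters into the attention-like projections only.
config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
client_model = get_peft_model(TinyLM(), config)

def adapter_state(model):
    """Extract only LoRA parameters: the tiny payload a client communicates."""
    return {k: v for k, v in model.state_dict().items() if "lora_" in k}

payload = adapter_state(client_model)
full = sum(p.numel() for p in client_model.parameters())
sent = sum(v.numel() for v in payload.values())
print(f"communicating {sent}/{full} parameters ({100 * sent / full:.1f}%)")
```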
Although remarkable progress has been made by existing federated learning (FL) platforms in providing infrastructures for development, these platforms may not adequately address the challenges brought by various types of heterogeneity. To fill this gap, in this paper, we propose a novel FL platform, named FederatedScope, which employs an event-driven architecture to give users great flexibility in independently describing the behaviors of different participants. Such a design makes it easy for users to describe participants with various local training processes, learning goals, and backends, and to coordinate them into an FL course with synchronous or asynchronous training strategies. Towards an easy-to-use and flexible platform, FederatedScope enables rich types of plug-in operations and components for efficient further development, and we have implemented several important components to better help users with privacy protection, attack simulation, and auto-tuning. We have released FederatedScope at https://github.com/alibaba/FederatedScope to promote academic research and industrial deployment of federated learning in a wide range of scenarios.