Smart devices, such as smartphones, wearables, and robots, can collect vast amounts of data from their environment. These data are suitable for training machine learning models, which can significantly improve device behavior and, therefore, the user experience. Federated learning is a young and popular framework that allows multiple distributed devices to train deep learning models collaboratively while preserving data privacy. Nevertheless, this approach may not be optimal in scenarios where the data distribution is non-identical among participants or changes over time, causing what is known as concept drift. Little research has been done in this field so far, yet such situations are quite frequent in real life and pose new challenges to both continual and federated learning. Therefore, in this work we present a new method called Concept-Drift-Aware Federated Averaging (CDA-FedAvg). Our proposal extends the most popular federated algorithm, Federated Averaging (FedAvg), enhancing it for continual adaptation under concept drift. We empirically demonstrate the weaknesses of regular FedAvg and show that CDA-FedAvg outperforms it in this type of scenario.
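For context, the FedAvg baseline that CDA-FedAvg extends aggregates client updates with a weighted average proportional to each client's local sample count. The sketch below illustrates only that standard aggregation step; the drift-detection and adaptation logic of CDA-FedAvg is not reproduced here, and all function and variable names (e.g. `fedavg_aggregate`, `client_sizes`) are hypothetical.

```python
# Minimal, illustrative sketch of the FedAvg aggregation step that
# CDA-FedAvg builds upon. Names are hypothetical; the drift-aware
# extensions proposed in the paper are not shown.
from typing import Dict, List
import numpy as np

def fedavg_aggregate(
    client_weights: List[Dict[str, np.ndarray]],  # one parameter dict per client
    client_sizes: List[int],                       # local sample counts
) -> Dict[str, np.ndarray]:
    """Return the sample-size-weighted average of the clients' parameters."""
    total = float(sum(client_sizes))
    aggregated: Dict[str, np.ndarray] = {}
    for name in client_weights[0]:
        aggregated[name] = sum(
            (n / total) * w[name] for w, n in zip(client_weights, client_sizes)
        )
    return aggregated

# Usage: two clients holding different amounts of local data.
clients = [
    {"layer.weight": np.ones((2, 2))},
    {"layer.weight": np.zeros((2, 2))},
]
global_update = fedavg_aggregate(clients, client_sizes=[300, 100])
print(global_update["layer.weight"])  # 0.75 everywhere: weighted toward client 0
```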
Federated Learning is a novel framework that allows multiple devices or institutions to train a machine learning model collaboratively while keeping their data private. This decentralized approach is prone to suffering the consequences of statistical heterogeneity in the data, both across the different entities and over time, which may lead to a lack of convergence. To avoid such issues, different methods have been proposed in recent years. However, data may be heterogeneous in many different ways, and current proposals do not always specify the kind of heterogeneity they consider. In this work, we formally classify data statistical heterogeneity and review the most notable learning strategies able to address it. At the same time, we introduce approaches from other machine learning frameworks, such as Continual Learning, that also deal with data heterogeneity and could easily be adapted to the Federated Learning setting.
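A common way to make the label-skew form of statistical heterogeneity across entities concrete is to partition a dataset among clients using a Dirichlet distribution over class proportions: smaller concentration values produce more skewed, non-IID partitions. The sketch below is only an illustration of that widely used simulation technique, not a method from the paper; all names (e.g. `dirichlet_label_skew`, `alpha`) are hypothetical.

```python
# Illustrative sketch: simulating label-skew statistical heterogeneity by
# splitting sample indices across clients with a Dirichlet prior over classes.
# A small alpha yields highly non-IID partitions; a large alpha is near-IID.
import numpy as np

def dirichlet_label_skew(labels: np.ndarray, n_clients: int, alpha: float,
                         seed: int = 0) -> list:
    """Return a list of sample-index arrays, one per client."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(n_clients)]
    for cls in np.unique(labels):
        cls_idx = rng.permutation(np.where(labels == cls)[0])
        # Proportion of this class assigned to each client.
        proportions = rng.dirichlet(alpha * np.ones(n_clients))
        splits = (np.cumsum(proportions) * len(cls_idx)).astype(int)[:-1]
        for client, part in enumerate(np.split(cls_idx, splits)):
            client_indices[client].extend(part.tolist())
    return [np.array(idx, dtype=int) for idx in client_indices]

# Usage: 10 classes, 3 clients, strongly skewed split (alpha = 0.1).
labels = np.repeat(np.arange(10), 100)          # 1000 samples, 100 per class
parts = dirichlet_label_skew(labels, n_clients=3, alpha=0.1)
for i, idx in enumerate(parts):
    print(f"client {i}: {len(idx)} samples, classes {sorted(set(labels[idx]))}")
```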