Federated learning is a semi-distributed algorithm in which a server communicates
with multiple dispersed clients to learn a global model. The federated
architecture is not robust: its one-server, multi-client structure makes it
sensitive to communication and computational overloads. It is also vulnerable to
privacy attacks that target personal information transmitted over the communication links. In
this work, we introduce graph federated learning (GFL), which consists of
multiple federated units connected by a graph. We then show how graph
homomorphic perturbations can be used to ensure the algorithm is differentially
private at the server level. At the client level, we show that the differentially
private federated learning algorithm can be improved by adding random noise to
the updates rather than to the models. We
conduct theoretical analyses of both convergence and privacy, and illustrate
performance by means of computer simulations.
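
The abstract does not spell out the graph homomorphic perturbation construction, but its stated effect at the server level can be illustrated with a toy sketch: if the noise injected across the graph of federated units sums to zero, each server's shared model is individually randomized while the network-wide aggregate, and hence convergence, is unaffected. The construction below (Gaussian noise recentered to be zero-mean across servers) is an assumption made for illustration, not the paper's scheme, and all names are hypothetical.

```python
import numpy as np

def zero_sum_noise(num_servers, dim, sigma, rng):
    """Draw one noise vector per server so that the perturbations
    cancel exactly when averaged over the whole graph of servers.
    Illustrative construction only; the paper's scheme may differ."""
    z = rng.normal(0.0, sigma, size=(num_servers, dim))
    return z - z.mean(axis=0, keepdims=True)

rng = np.random.default_rng(0)
P, dim = 5, 3                        # 5 federated units, 3-dim models
models = rng.standard_normal((P, dim))
perturbed = models + zero_sum_noise(P, dim, sigma=1.0, rng=rng)

# Each server's shared model is randomized, yet the network-wide
# aggregate is untouched, so convergence is not degraded.
assert np.allclose(perturbed.mean(axis=0), models.mean(axis=0))
```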
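Similarly, the client-level claim, that adding noise to the updates rather than to the models improves the differentially private algorithm, can be sketched as follows. The intuition (an illustrative assumption here, not a calibrated privacy mechanism) is that an update has a much smaller norm than a full model, so less noise suffices to mask it; the step size `mu` and noise scales below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
mu = 0.1                                 # step size (assumed)
w = rng.standard_normal(4)               # current local model
grad = rng.standard_normal(4)            # local gradient

# Baseline: perturb the model itself before sharing it; the noise
# must mask a quantity whose norm can be large.
sigma_model = np.linalg.norm(w - mu * grad)
shared_model = (w - mu * grad) + rng.normal(0.0, sigma_model, 4)

# Variant from the abstract: perturb only the update. Its norm is
# just mu * ||grad||, so far less noise masks it to the same degree.
delta = -mu * grad
sigma_update = np.linalg.norm(delta)
shared_update = delta + rng.normal(0.0, sigma_update, 4)
# The server then applies: new_model = old_model + shared_update.
```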