“…In traditional FL, every participant owns its local training dataset and updates the same global model w_t via a parameter server using its local model/gradients. The local gradients ∇F(w_t, D_{i,j}) can be protected via either secure aggregation [4,30,31,34,40,47] or differential privacy mechanisms [2,8,21,42,44,48]. This process can be decentralized by replacing the parameter server with a peer-to-peer communication mechanism [22,39].…”
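The update loop described above can be sketched as a minimal FedSGD-style simulation. This is an illustrative sketch only, not the paper's method: it assumes a squared-error loss F, uses each participant's whole local dataset D_i per round, and adds plain Gaussian noise to the aggregate as a stand-in for the cited secure-aggregation or DP mechanisms; all function and variable names (`local_gradient`, `federated_round`, `dp_sigma`) are hypothetical.

```python
import numpy as np

def local_gradient(w, X, y):
    # Gradient of a mean-squared-error loss F(w, D_i) on one
    # participant's local dataset (illustrative choice of loss).
    return 2.0 * X.T @ (X @ w - y) / len(y)

def federated_round(w, datasets, lr=0.1, dp_sigma=0.0, rng=None):
    # One round: every participant computes a gradient at the current
    # global model w; the parameter server averages the updates.
    grads = [local_gradient(w, X, y) for X, y in datasets]
    avg = np.mean(grads, axis=0)
    if dp_sigma > 0.0:
        # Gaussian perturbation, a toy stand-in for a DP mechanism.
        avg = avg + rng.normal(0.0, dp_sigma, size=avg.shape)
    return w - lr * avg

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])

# Five participants, each holding a private local dataset (X, y).
datasets = []
for _ in range(5):
    X = rng.normal(size=(20, 2))
    datasets.append((X, X @ true_w))

# Run 200 synchronous rounds without noise; the global model converges.
w = np.zeros(2)
for _ in range(200):
    w = federated_round(w, datasets, lr=0.1)
```

Replacing the averaging step in `federated_round` with a peer-to-peer gossip exchange would correspond to the decentralized variant mentioned at the end of the excerpt.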