“…In the centralized setting, it has been established that various optimization methods, including online gradient descent (Zinkevich, 2003), online dual averaging (Xiao, 2010), online mirror descent (Duchi et al, 2010), and many others (Shalev-Shwartz, 2012; Hazan, 2019), achieve upper bounds of O(√T) and O(log T) on the static regret for convex and strongly convex loss functions, respectively. The static regret of distributed online convex optimization algorithms has also been extensively studied in the literature (Hosseini et al, 2013; Mateos-Núñez and Cortés, 2014; Akbari et al, 2015; Tsianos and Rabbat, 2016; Lee et al, 2016; Yuan et al, 2020), where the same regret rates have been derived under similar convexity assumptions. However, it was not previously known whether similar results hold for the more useful dynamic regret.…”
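For reference, the two regret notions contrasted in this passage are commonly defined as follows; the notation (f_t for the round-t loss, x_t for the learner's decision, and the feasible set X) is standard in this literature and not taken verbatim from the excerpt above. The static regret compares the learner's cumulative loss to that of the best fixed decision in hindsight,

\[
R_T^{\mathrm{s}} \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \min_{x \in \mathcal{X}} \sum_{t=1}^{T} f_t(x),
\]

whereas the dynamic regret compares it to a time-varying sequence of per-round minimizers,

\[
R_T^{\mathrm{d}} \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \sum_{t=1}^{T} \min_{x \in \mathcal{X}} f_t(x),
\]

so dynamic regret is the more demanding benchmark, and bounds on it typically depend on how much the minimizer sequence varies over the horizon T.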