“…In the proposed personalized gradient tracking strategy, the dynamic gradient-tracking update is interlaced with a learning mechanism that lets each node $i$ learn the user's cost function $U_i(x)$ by employing noisy user feedback in the form of a scalar quantity $y_{i,t} = U_i(x_{i,t}) + \varepsilon_{i,t}$, where $x_{i,t}$ is the local, tentative solution at time $t$ and $\varepsilon_{i,t}$ is a noise term. It is worth pointing out that in this paper we consider convex parametric models, rather than more general non-parametric models such as Gaussian processes [12,16–21] or convex regression [22,23]. The reasons for this choice are that (i) users' cost functions are often convex, or can often be approximated as convex (see, e.g., [24,25] and references therein), which makes the overall optimization problem much easier to solve; (ii) convex parametric models enjoy better asymptotic rate bounds² than convex non-parametric models [22], which is fundamental when attempting to learn from scarce data; and (iii) a solid online theory already exists in the form of recursive least squares (RLS) [26–31].…”
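To make the RLS-based learning mechanism concrete, the following is a minimal sketch (not the paper's implementation) of how a node could recursively estimate the parameters of a convex parametric model of $U_i$ from scalar noisy feedback $y_{i,t} = U_i(x_{i,t}) + \varepsilon_{i,t}$. The quadratic feature map `phi`, the regularization constant `delta`, and the "true" user cost used for the simulation are all illustrative assumptions.

```python
import numpy as np

def phi(x):
    # Hypothetical feature map: models U(x) as a convex quadratic a*x^2 + b*x + c.
    return np.array([x**2, x, 1.0])

class RLS:
    """Standard recursive least squares with optional exponential forgetting."""
    def __init__(self, dim, lam=1.0, delta=100.0):
        self.theta = np.zeros(dim)    # current parameter estimate
        self.P = delta * np.eye(dim)  # (scaled) inverse covariance matrix
        self.lam = lam                # forgetting factor (1.0 = no forgetting)

    def update(self, feat, y):
        # One RLS step on a single scalar observation y ≈ feat @ theta.
        Pf = self.P @ feat
        k = Pf / (self.lam + feat @ Pf)            # gain vector
        self.theta = self.theta + k * (y - feat @ self.theta)
        self.P = (self.P - np.outer(k, Pf)) / self.lam
        return self.theta

# Simulated noisy user feedback, as in the model y_t = U(x_t) + noise.
rng = np.random.default_rng(0)
true_theta = np.array([2.0, -1.0, 0.5])  # assumed user cost: 2x^2 - x + 0.5
est = RLS(dim=3)
for _ in range(500):
    x = rng.uniform(-2.0, 2.0)                              # tentative solution
    y = true_theta @ phi(x) + 0.01 * rng.standard_normal()  # noisy scalar feedback
    est.update(phi(x), y)
```

Because the observation enters only as a scalar, each node needs no gradient information about $U_i$; the recursive form keeps per-step cost constant, which is what makes interlacing the estimator with the gradient-tracking update practical.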