Adaptive online optimization algorithms, such as ADAM, RMSPROP, and ADABOUND, have recently become tremendously popular owing to their wide application to problems in deep learning. Despite their prevalence, however, the distributed versions of these adaptive online algorithms have rarely been investigated. To fill this gap, a distributed online adaptive subgradient learning algorithm over time-varying networks, called DADAXBOUND, is developed; it exponentially accumulates long-term past gradient information and imposes dynamic bounds on the learning rates through learning-rate clipping. Then, the dynamic regret bound of DADAXBOUND for convex and potentially nonsmooth objective functions is theoretically analysed. Finally, numerical experiments are carried out on different datasets to assess the effectiveness of DADAXBOUND. The experimental results demonstrate that DADAXBOUND compares favourably with other competing distributed online optimization algorithms.
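As a rough illustration of the mechanism summarised above, and not the paper's exact update rule, the following Python sketch combines a consensus-averaging step over a node's neighbours with exponentially accumulated gradient moments and an ADABOUND-style clipped adaptive learning rate. All function and variable names, the bound schedule, and the hyperparameter values are assumptions introduced for illustration only.

```python
import numpy as np

def clipped_adaptive_step(x, g, m, v, t, peers_x, weights,
                          alpha=0.01, beta1=0.9, beta2=0.999,
                          eps=1e-8, gamma=1e-3):
    """One illustrative local update at a node: consensus averaging with
    neighbours, exponential accumulation of past gradient information, and
    a clipped (dynamically bounded) per-coordinate learning rate.
    This is a hypothetical sketch, not the DADAXBOUND update itself."""
    # Consensus step: mix the local iterate with the neighbours' iterates
    # using the (possibly time-varying) mixing weights of the network.
    x_mix = sum(w * xj for w, xj in zip(weights, peers_x))

    # Exponentially accumulate first and second moments of past gradients.
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g**2

    # Dynamic lower/upper bounds that tighten toward the base step size
    # alpha over time (assumed ADABOUND-style schedule).
    lower = alpha * (1.0 - 1.0 / (gamma * t + 1.0))
    upper = alpha * (1.0 + 1.0 / (gamma * t))

    # Clip the per-coordinate adaptive learning rate into [lower, upper].
    lr = np.clip(alpha / (np.sqrt(v) + eps), lower, upper)

    # Subgradient step with the clipped adaptive learning rate.
    x_new = x_mix - lr * m
    return x_new, m, v
```

In this sketch the clipping interval collapses toward the base step size as t grows, so the update behaves adaptively in early iterations and approaches a plain (sub)gradient step later on, which is the intuition behind dynamically bounded learning rates.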