Technological advances in ad-hoc networking and the availability of low-cost, reliable computing, data storage, and sensing devices have made possible scenarios where the coordination of many subsystems extends the range of human capabilities. Smart grid operations, smart transportation, smart healthcare, and sensing networks for environmental monitoring and exploration in hazardous situations are just a few examples of such networked operations. In these applications, the ability of a network system to fuse information, compute common estimates of unknown quantities, and agree on a common view of the world, all in a decentralized fashion, is critical. These problems can be formulated as agreement problems on linear combinations of dynamically changing reference signals or local parameters. This dynamic agreement problem corresponds to dynamic average consensus, the problem of interest in this article. The dynamic average consensus problem is for a group of agents to cooperate in order to track the average of locally available time-varying reference signals, where each agent is capable only of local computations and of communicating with local neighbors; see Figure 1.

Figure 1: A group of communicating agents, each endowed with a time-varying reference signal.

Centralized solutions have drawbacks

The difficulty in the dynamic average consensus problem is that the information is distributed across the network. A straightforward solution, termed centralized, is to gather all of the information in a single place, do the computation (in other words, calculate the average), and then send the solution back through the network to each agent. Although simple, the centralized approach has numerous drawbacks: (1) the algorithm is not robust to failures of the centralized agent (if the centralized agent fails, then the entire computation fails); (2) the method is not scalable, since the amount of communication and memory required on each agent grows with the size of the network; (3) each agent must have a unique identifier (so that the centralized agent counts each value only once); (4) the calculated average is delayed by an amount that grows with the size of the network; and (5) the reference signals from each agent are exposed over the entire network, which is unacceptable in applications involving sensitive data.

The centralized solution is fragile due to the existence of a single point of failure in the network. This fragility can be overcome by having every agent act as the centralized agent. In this approach, referred to as flooding, agents transmit the values of the reference signals across the entire network until each agent knows every reference signal. This may be summarized as "first do all communications, then do all computations." While flooding fixes the issue of robustness to agent failures, it is still subject to many of the drawbacks of the centralized solution. Also, although this approach works reasonably well for small networks,...
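To make the problem statement concrete, the following is a minimal simulation sketch of a basic first-order dynamic consensus estimator of the kind studied in this literature: each agent initializes its estimate to its own reference signal, feeds forward the derivative of that signal, and diffuses disagreement with its neighbors. The ring graph, gain k, step size, and sinusoidal reference signals are hypothetical choices made here for illustration and are not taken from the article.

```python
import numpy as np

# Illustrative simulation of a first-order dynamic average consensus
# estimator on a fixed undirected graph (all parameters hypothetical).
#
# Each agent i holds an estimate x_i and updates it as
#   x_i(0)  = u_i(0)
#   dx_i/dt = du_i/dt - k * sum_{j in N_i} (x_i - x_j),
# using only its own signal and its neighbors' estimates. For
# time-varying signals the tracking error is bounded, not zero.

# Ring graph on 4 agents, encoded by its Laplacian L = D - A.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

k = 2.0      # consensus gain (illustrative choice)
dt = 1e-3    # forward-Euler step size
T = 10.0     # simulation horizon

def u(t):
    """Local time-varying reference signals u_i(t) (illustrative)."""
    offsets = np.array([1.0, -2.0, 0.5, 3.0])
    return offsets + np.sin(t + np.arange(4))

def du(t):
    """Time derivatives of the reference signals."""
    return np.cos(t + np.arange(4))

x = u(0.0)                   # x_i(0) = u_i(0)
for step in range(int(T / dt)):
    t = step * dt
    # L @ x computes sum_{j in N_i} (x_i - x_j) for every agent i.
    x = x + dt * (du(t) - k * (L @ x))

print("final estimates:", x)
print("true average:   ", u(T).mean())
```

Because the graph is undirected, the Laplacian satisfies 1'L = 0, so the sum of the estimates tracks the sum of the reference signals from the matched initialization onward; the diffusion term then drives the individual estimates toward their common average, with an error that shrinks as the signals vary more slowly or the gain increases.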