Recently, there has been increasing interest in the control community in studying large-scale distributed systems. Efforts have been devoted to developing techniques that address the main challenges found in this class of problems, for instance, the amount of information needed to guarantee the proper operation of the system and the economic costs associated with the required communication structure. A further issue arises when a large amount of data is required to control the system: the measurement and transmission processes, together with the computation of the control inputs, impose a high computational burden on the closed loop.

One way to overcome such problems is to adopt a multi-agent systems framework, which may be cast in game-theoretic terms. Game theory studies interactions among self-interested agents. In particular, it addresses the problem of interaction among agents that use different strategies and wish to maximize their individual welfare. For instance, in [1] the authors provide connections between games, optimization, and learning for signal processing in networks. Other approaches combining learning and games can be found in [2]. In [3], distributed computation algorithms based on generalized convex games are developed that do not require full information and that accommodate dynamically changing network topologies. Applications of game theory to the control of optical networks and game-theoretic methods for smart grids are described in [4], [5], [6]. Another game-theoretic approach is to design protocols or mechanisms that possess certain desirable properties [7]. This approach leads to a broad analysis of multi-agent interactions, particularly those involving negotiation and coordination problems [8]. Other game-theoretic applications to engineering are reported in [9].

From a game-theoretic perspective, three types of games can be distinguished: matrix games, continuous games, and differential/dynamic games (see "The relationship among matricial games, full-potential population games, and resource allocation problems"). In matrix games (generally expressed in normal form), individuals play simultaneously and only once, decisions are made in static terms, and players or agents are treated individually. In contrast, in continuous games players have infinitely many pure strategies [10], [11]. In dynamic games, on the other hand, it is assumed that players can use some type of learning mechanism that allows them to adjust their actions based on their past decisions. Dynamic games are characterized by three main modeling problems: i) how to model the environment in which players interact; ii) how to model the objectives of the players; and iii) how to specify the order in which actions are taken and how much information each player possesses. In this setting, it is assumed that there are interactions among a large number of agents, which are usually unknown to one another [12].

Among these dynamic game...