Infinite-horizon optimal control problems for nonlinear systems are studied and discussed. First, we thoroughly revisit the formulation of the underlying dynamic optimisation problem together with the classical results providing its solution. Then, we consider two alternative methods, developed in recent years, to construct solutions (or approximations thereof) of such problems, which provide theoretical insights as well as computational benefits. While the considered methods are mostly based on tools borrowed from the theories of Dynamic Programming and Pontryagin's Minimum Principle, or a combination of the two, the proposed control design strategies yield innovative, systematic and constructive methods for obtaining exact or approximate solutions of nonlinear optimal control problems. Interestingly, similar ideas can also be extended to linear and nonlinear differential games, namely dynamic optimisation problems involving several decision-makers. Due to their advantages in terms of computational complexity, the considered methods have found several applications. An example is provided by the multi-agent collision avoidance problem, for which both simulation and experimental results are presented.
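For orientation, the standard infinite-horizon formulation underlying such problems may be sketched as follows (this is generic textbook background, not the specific setting or notation of this work):

\[
\min_{u(\cdot)} \int_{0}^{\infty} \ell\big(x(t),u(t)\big)\,\mathrm{d}t, \qquad \dot{x}(t) = f\big(x(t),u(t)\big), \quad x(0) = x_0,
\]

where \(x\) denotes the state, \(u\) the control input, and \(\ell \ge 0\) the running cost. Dynamic Programming characterises the optimal value function \(V\) via the (stationary) Hamilton–Jacobi–Bellman equation

\[
0 = \min_{u}\left\{ \ell(x,u) + \frac{\partial V}{\partial x}(x)\, f(x,u) \right\},
\]

whereas Pontryagin's Minimum Principle yields necessary conditions along optimal trajectories in terms of a Hamiltonian and an adjoint (costate) equation; the methods discussed herein exploit these two viewpoints.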