We consider multi-agent learning via online gradient descent (OGD) in a class of games called λ-cocoercive games, a broad class that admits many Nash equilibria and properly includes strongly monotone games. We characterize the finite-time last-iterate convergence rate of joint OGD learning in λ-cocoercive games. Building on this result, we develop a fully adaptive OGD learning algorithm that requires no knowledge of the problem parameters (e.g., the cocoercive constant λ) and show, via a novel double-stopping-time technique, that this adaptive algorithm achieves the same finite-time last-iterate convergence rate as its non-adaptive counterpart. Subsequently, we extend OGD learning to the noisy gradient feedback case and establish last-iterate convergence results: first qualitative almost sure convergence, then quantitative finite-time convergence rates, all under non-decreasing step-sizes. These results fill several gaps in the existing multi-agent online learning literature, where three aspects (finite-time convergence rates, non-decreasing step-sizes, and fully adaptive algorithms) have not been previously explored.
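For concreteness, the following is a minimal sketch of the standard definitions behind this setup; the notation (joint action profile $x = (x_1, \dots, x_N)$, individual gradient field $v$, step-size $\eta_t$) is introduced here only for illustration and is not fixed by the abstract itself. Writing $v_i(x)$ for the gradient of player $i$'s loss with respect to its own action and $v(x) = (v_1(x), \dots, v_N(x))$, the game is λ-cocoercive when
\[
\langle v(x) - v(x'),\, x - x' \rangle \;\ge\; \lambda\, \lVert v(x) - v(x') \rVert^2
\qquad \text{for all } x,\, x',
\]
and, in the unconstrained case, joint OGD learning has each player $i$ update
\[
x_i^{t+1} \;=\; x_i^t - \eta_t\, v_i(x^t),
\]
where $x^t$ is the joint action at round $t$ and $\eta_t$ is the step-size schedule (non-decreasing or adaptive, as in the variants discussed above).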