We propose a random adaptation variant of time-varying distributed averaging dynamics in discrete time. We show that this variant leads to novel interpretations of fundamental concepts in distributed averaging, opinion dynamics, and distributed learning. Namely, we show that the ergodicity of a stochastic chain is equivalent to almost sure (a.s.) attainment of agreement in finite time under the proposed random adaptation dynamics. Using this result, we provide a new interpretation of the absolute probability sequence of an ergodic chain. We then modify the base-case dynamics into a time-reversed inhomogeneous Markov chain and show that, in this case, ergodicity is equivalent to the uniqueness of the limiting distributions of the Markov chain. Finally, we introduce and study a time-varying random adaptation version of the Friedkin-Johnsen model and a rank-one perturbation of the base-case dynamics.
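To make the setting concrete, the following is a minimal simulation sketch of one natural reading of a random adaptation update: at each time, every agent copies the value of an agent drawn according to its row of the current stochastic matrix, instead of averaging over that row. The update rule, function names, and the example chain below are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def random_adaptation_step(x, W, rng):
    """One step of the (assumed) random adaptation update:
    agent i copies the value of an agent j drawn with probability
    W[i, j], rather than computing the average W[i, :] @ x."""
    n = len(x)
    x_next = np.empty_like(x)
    for i in range(n):
        j = rng.choice(n, p=W[i])
        x_next[i] = x[j]
    return x_next

def agreement_time(x0, chain, rng, max_steps=10_000):
    """Run the dynamics along a (possibly time-varying) chain of
    row-stochastic matrices until all agents hold the same value.
    Returns the first such time, or None if max_steps is reached."""
    x = np.asarray(x0, dtype=float).copy()
    for t in range(max_steps):
        if np.ptp(x) == 0.0:  # all entries equal: agreement reached
            return t
        x = random_adaptation_step(x, chain(t), rng)
    return None

# Hypothetical example: the same row-stochastic matrix at every step,
# standing in for a time-varying ergodic chain.
rng = np.random.default_rng(0)
n = 5
W = np.full((n, n), 1.0 / n)
x0 = np.arange(n, dtype=float)
print("agreement time:", agreement_time(x0, lambda t: W, rng))
```

Under this reading, the equivalence stated above says that agreement is reached in finite time a.s. (so the loop above terminates with probability one) precisely when the underlying chain is ergodic.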