The purpose of this paper is to study the dynamical behavior of the sequence produced by a forward-backward algorithm involving two random maximal monotone operators and a sequence of decreasing step sizes. Defining the mean of a random monotone operator as an Aumann integral, and assuming that the sum of the two mean operators is maximal (sufficient maximality conditions are provided), it is shown that, with probability one, the interpolated process obtained from the iterates is an asymptotic pseudotrajectory, in the sense of Benaïm and Hirsch, of the differential inclusion involving the sum of the mean operators. The convergence of the empirical means of the iterates towards a zero of the sum of the mean operators is shown, as well as the convergence of the sequence itself to such a zero under a demipositivity assumption. These results find applications in a wide range of optimization problems and variational inequalities in random environments.
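For concreteness, the iteration under study can be sketched as follows; the notation below ($\gamma_n$ for the step sizes, $\xi_n$ for the i.i.d. random variables with common law $\mu$, $A(s)$ and $B(s)$ for the random operators, with $B(s)$ taken single-valued so that the forward step is well defined) is assumed for illustration and is not taken verbatim from the paper:
\[
  x_{n+1} \;=\; \bigl(I + \gamma_{n+1}\, A(\xi_{n+1})\bigr)^{-1}
                \bigl(x_n - \gamma_{n+1}\, B(\xi_{n+1})\, x_n\bigr),
\]
a backward (resolvent) step on $A(\xi_{n+1})$ applied after a forward step on $B(\xi_{n+1})$. The mean operators are then the Aumann integrals
\[
  \mathcal{A}(x) \;=\; \int A(s)(x)\,\mu(\mathrm{d}s), \qquad
  \mathcal{B}(x) \;=\; \int B(s)(x)\,\mu(\mathrm{d}s),
\]
and the differential inclusion tracked by the interpolated process reads
\[
  \dot{x}(t) \;\in\; -\bigl(\mathcal{A} + \mathcal{B}\bigr)\bigl(x(t)\bigr).
\]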