The convergence of Markov chain-based Monte Carlo linear solvers using the Ulam-von Neumann algorithm for a linear system of the form x = Hx + b is investigated in this paper. We analyze the convergence of the Monte Carlo solver based on the original Ulam-von Neumann algorithm under the conditions ‖H‖ < 1 as well as ρ(H) < 1, where ρ(H) is the spectral radius of H. We find that although the Monte Carlo solver is based on sampling the Neumann series, convergence of the Neumann series is not a sufficient condition for convergence of the Monte Carlo solver. In fact, the properties of H are not the only factors determining convergence; the underlying transition probability matrix plays an important role. An improper selection of the transition matrix may result in divergence even though the condition ‖H‖ < 1 holds. However, if ‖H‖ < 1 is satisfied, we show that there always exist transition matrices that guarantee convergence of the Monte Carlo solver. On the other hand, if ρ(H) < 1 but ‖H‖ ≥ 1, the Monte Carlo linear solver may or may not converge. In particular, if the row sum ∑_{j=1}^N |H_{ij}| > 1 for every row of H or, more generally, ρ(H^+) > 1, where H^+ is the nonnegative matrix with entries H^+_{ij} = |H_{ij}|, we show that no transition matrix leads to convergence of the Monte Carlo solver. Finally, given H and a transition matrix P, defining the matrix H^* by H^*_{ij} = H_{ij}^2 / P_{ij}, we find that ρ(H^*) < 1 is a necessary and sufficient condition for convergence of the Markov chain-based Monte Carlo linear solver using the Ulam-von Neumann algorithm.
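To make the quantities above concrete, here is a minimal Python sketch (all numerical values are hypothetical, and the choice P_{ij} = |H_{ij}| with termination probability 1 - ∑_j |H_{ij}| is just one common selection of transition matrix, not the only one): it forms H^* with H^*_{ij} = H_{ij}^2 / P_{ij}, checks the condition ρ(H^*) < 1, and estimates one component of the solution of x = Hx + b with the plain Ulam-von Neumann random-walk estimator.

```python
import numpy as np

rng = np.random.default_rng(0)


def mao_transition_matrix(H):
    """One common ("Monte Carlo almost optimal") choice: P_ij = |H_ij|, with
    per-row termination probability 1 - sum_j |H_ij| (this requires every
    absolute row sum of H to be below 1)."""
    P = np.abs(H)
    stop = 1.0 - P.sum(axis=1)
    assert np.all(stop > 0), "this choice of P needs all absolute row sums of H < 1"
    return P, stop


def rho_H_star(H, P):
    """Spectral radius of H* with H*_ij = H_ij**2 / P_ij (0/0 read as 0);
    rho(H*) < 1 is the necessary and sufficient convergence condition."""
    H_star = np.zeros_like(H, dtype=float)
    mask = P > 0
    H_star[mask] = H[mask] ** 2 / P[mask]
    return np.max(np.abs(np.linalg.eigvals(H_star)))


def ulam_von_neumann(H, b, P, stop, i, n_walks=50_000):
    """Estimate the i-th component of the solution of x = Hx + b by averaging
    random-walk scores: each walk starts at state i, moves j -> k with
    probability P[j, k], terminates with probability stop[j], and accumulates
    W_t * b[k_t], where W_0 = 1 and W_t = W_{t-1} * H[k_{t-1}, k_t] / P[k_{t-1}, k_t]."""
    n = len(b)
    total = 0.0
    for _ in range(n_walks):
        state, weight, score = i, 1.0, b[i]
        while True:
            # index n plays the role of the absorbing "stop" state
            nxt = rng.choice(n + 1, p=np.append(P[state], stop[state]))
            if nxt == n:
                break
            weight *= H[state, nxt] / P[state, nxt]
            score += weight * b[nxt]
            state = nxt
        total += score
    return total / n_walks


# Hypothetical 3x3 system chosen so that every absolute row sum of H is < 1.
H = np.array([[0.1, 0.3, 0.2],
              [0.2, 0.1, 0.3],
              [0.3, 0.2, 0.1]])
b = np.array([1.0, 2.0, 3.0])

P, stop = mao_transition_matrix(H)
print("rho(H*)           :", rho_H_star(H, P))                     # < 1 here
print("MC estimate of x_0:", ulam_von_neumann(H, b, P, stop, i=0))
print("exact x_0         :", np.linalg.solve(np.eye(3) - H, b)[0])
```

With this particular transition choice H^*_{ij} = H_{ij}^2 / |H_{ij}| = |H_{ij}|, so the check reduces to ρ(H^+) < 1; for a general P the two quantities differ, which is exactly why the transition matrix, and not H alone, decides convergence.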