We are interested in risk constraints for infinite horizon discrete time Markov decision processes (MDPs). Starting with average reward MDPs, we show that increasing concave stochastic dominance constraints on the empirical distribution of reward lead to linear constraints on occupation measures. The optimal policy for the resulting class of dominance-constrained MDPs is obtained by solving a linear program. We compute the dual of this linear program to obtain average dynamic programming optimality equations that reflect the dominance constraint. In particular, a new pricing term corresponding to the dominance constraint appears in the optimality equations. We show that many types of stochastic orders can be used in place of the increasing concave stochastic order. We also carry out a parallel development for discounted reward MDPs with stochastic dominance constraints. The paper concludes with a portfolio optimization example.

Risk management for MDPs has been considered from many perspectives in the literature. Penalties for the variance of rewards in MDPs are introduced in [20], where the optimal policy is obtained by solving a nonlinear programming problem in occupation measures. In [37], the mean-variance trade-off in MDPs is further explored in a Pareto-optimality sense. The conditional value-at-risk of the total cost in a finite horizon MDP is constrained in [4]; it is argued there that convex analytic methods do not apply to this problem type, and an offline iterative algorithm is employed to solve for the optimal policy. Markov risk measures for finite horizon and infinite horizon discounted MDPs are developed in [35], where dynamic programming equations are derived that reflect the risk aversion, and policy iteration is shown to solve the infinite horizon problem.

Our notion of risk-constrained MDPs differs from the work surveyed above: we are interested in the empirical distribution of reward, rather than in its expectation, variance, or other summary statistics. Our approach is based on stochastic orders, which are partial orders on the space of random variables; see [33,36] for extensive monographs on stochastic orders. The increasing concave stochastic order is used in [9,10] to define stochastic dominance constraints in single-stage stochastic optimization. This order is notable because it captures the preferences of all risk-averse decision makers. In [9,10], a benchmark random variable is introduced, and a concave random variable-valued mapping is constrained to dominate the benchmark in the increasing concave stochastic order. It is shown that increasing concave functions serve as the Lagrange multipliers of the dominance constraints; the dual problem is a search over a certain class of increasing concave functions, interpreted as utility functions, and strong duality is established. Stochastic dominance constraints are applied to finite horizon stochastic programming problems with linear system dynamics in [12]. Specifically, a stochastic dominance constraint is placed on a vec...
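For orientation, we recall a standard characterization of the increasing concave order; this is the textbook formulation (see, e.g., [33,36]), stated here for concreteness rather than quoted from [9,10]. A random variable $X$ dominates a random variable $Y$ in the increasing concave order, written $X \succeq_{\mathrm{icv}} Y$, if
\[
\mathbb{E}\left[u(X)\right] \;\ge\; \mathbb{E}\left[u(Y)\right]
\quad \text{for every increasing concave function } u
\]
for which both expectations exist, or equivalently, in terms of truncated expectations,
\[
\mathbb{E}\left[\min(X,\eta)\right] \;\ge\; \mathbb{E}\left[\min(Y,\eta)\right]
\quad \text{for all } \eta \in \mathbb{R}.
\]
A dominance constraint in the sense of [9,10] then takes the form $G(z) \succeq_{\mathrm{icv}} B$, where $z$ is the decision variable, $G$ is a concave random variable-valued mapping, and $B$ is the benchmark random variable.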
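To suggest why such a constraint is linear in the occupation measure, as claimed above for average reward MDPs, consider the following sketch; the notation $r(s,a)$ for the one-step reward and $\mu(s,a)$ for the long-run occupation measure is ours and is not taken verbatim from the paper. If $R_\mu$ denotes a reward drawn from the empirical distribution induced by $\mu$, i.e. $\Pr\left(R_\mu = r(s,a)\right) = \mu(s,a)$, then the truncated-expectation characterization of the order reads
\[
\sum_{s,a} \min\{r(s,a),\eta\}\,\mu(s,a)
\;\ge\; \mathbb{E}\left[\min(B,\eta)\right]
\quad \text{for all } \eta \in \mathbb{R}.
\]
For each fixed $\eta$ the left-hand side is linear in $\mu$, so the dominance constraint is a family of linear constraints that can be appended to the classical occupation-measure linear program for average reward MDPs.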