We provide an axiomatic analysis of dynamic random utility, characterizing the stochastic choice behavior of agents who solve dynamic decision problems by maximizing some stochastic process (U_t) of utilities. We show first that even when (U_t) is arbitrary, dynamic random utility imposes new testable across-period restrictions on behavior, over and above period-by-period analogs of the static random utility axioms. An important feature of dynamic random utility is that behavior may appear history-dependent, because period-t choices reveal information about U_t, which may be serially correlated; however, our key new axioms highlight that the model entails specific limits on the form of history dependence that can arise. Second, we show that imposing natural Bayesian rationality axioms restricts the form of randomness that (U_t) can display. By contrast, a specification of utility shocks that is widely used in empirical work violates these restrictions, leading to behavior that may display a negative option value and can produce biased parameter estimates. Finally, dynamic stochastic choice data allow us to characterize important special cases of random utility—in particular, learning and taste persistence—that on static domains are indistinguishable from the general model.