Bayesian theories in cognitive science hold that cognition is fundamentally probabilistic, yet people's explicit probability judgments often violate the laws of probability. Two recent proposals, the "Probability Theory plus Noise" (PT+N; Costello & Watts, 2014) and "Bayesian Sampler" (Zhu, Sanborn, & Chater, 2020) theories of probability judgments, both seek to account for these biases while maintaining that mental credences are fundamentally probabilistic. The models differ in their averaged predictions about people's conditional probability judgments and in their distributional predictions about people's overall patterns of judgments. In particular, the Bayesian Sampler's Bayesian adjustment process predicts a truncated range of responses as well as a correlation between the average degree of bias and trial-to-trial variability. However, exploring these distributional predictions with participants' raw responses requires careful treatment of rounding errors and exogenous response processes. Here, I cast both theories into a Bayesian data analysis framework that supports the treatment of these issues along with principled model comparison using information criteria. Comparing the fits of both models on data collected by Zhu and colleagues (2020), I find that these data are best explained by an account in which biases arise from "noise" in the sample-reading process but conditional probability judgments are produced by conditioning within the mental model of the events, rather than by the two-stage mental sampling process proposed by the PT+N model.
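For orientation, the two models' averaged predictions for a simple event can be sketched in the notation commonly used in the cited papers (the symbols below follow that common usage and are not taken verbatim from the present analysis). Under PT+N, a judgment is formed by reading a set of mental samples for event $A$ with underlying subjective probability $p_A$, each sample being misread with probability $d$, so the expected judgment is
$$\mathbb{E}[\hat{p}_A] = (1 - 2d)\,p_A + d.$$
Under the Bayesian Sampler, the judgment is the posterior mean after observing $N$ mental samples under a symmetric $\mathrm{Beta}(\beta, \beta)$ prior, so
$$\mathbb{E}[\hat{p}_A] = \frac{N}{N + 2\beta}\,p_A + \frac{\beta}{N + 2\beta}.$$
For simple events these expressions coincide when $d = \beta/(N + 2\beta)$, which is why the models must instead be distinguished by their predictions for conditional probability judgments and by their distributional predictions.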