Most decision analyses include continuous uncertainties (e.g., oil in place, oil price, or porosity). Analysts are frequently concerned with how best to structure, compute, and communicate decision models under these circumstances. While decision trees are well suited to discrete random variables with a few possible outcomes, they become unmanageable when the number of outcomes is large. To address this concern, analysts frequently use discrete approximations such as Swanson's Mean, which approximates a continuous probability distribution by weighting its P10-P50-P90 fractiles by 0.30-0.40-0.30. Unfortunately, this method, and others like it, significantly underestimate the mean, variance, and skewness of most distributions, especially the lognormal, for which its use is common. In this paper, we compare different discretizations within the context of a value-of-information problem and document the degree of error each induces. We find that the best discretization depends on the decision context, which is difficult to specify in advance. In addition, we contrast discrete approximations with Monte Carlo simulation, which many view as more accurate. One must keep in mind, however, that simulation induces sampling error, while discretization induces approximation error; the question is how many Monte Carlo (MC) trials are required before these two errors are comparable. We find that it takes thousands, perhaps tens of thousands, of MC trials to produce better results than simple discretization methods. This matters greatly when MC is used with a model for which a single realization is expensive to compute.
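As a minimal sketch of the comparison described above, the following Python snippet applies Swanson's Mean to a lognormal distribution, compares the discretized mean and variance with the exact lognormal moments, and contrasts that approximation error with the sampling error of a plain Monte Carlo estimate. The lognormal parameters (MU, SIGMA) are illustrative assumptions, not values taken from the paper.

```python
import math
import random
from statistics import NormalDist

# Illustrative lognormal parameters (assumed, not from the paper):
# ln(X) ~ Normal(MU, SIGMA).
MU, SIGMA = 0.0, 1.0

def lognormal_quantile(p, mu=MU, sigma=SIGMA):
    """Lognormal quantile via the quantile of the underlying normal."""
    return math.exp(mu + sigma * NormalDist().inv_cdf(p))

# Swanson's Mean: weight the P10/P50/P90 fractiles by 0.30/0.40/0.30.
points  = [lognormal_quantile(p) for p in (0.10, 0.50, 0.90)]
weights = [0.30, 0.40, 0.30]

disc_mean = sum(w * x for w, x in zip(weights, points))
disc_var  = sum(w * (x - disc_mean) ** 2 for w, x in zip(weights, points))

# Exact lognormal moments for comparison.
exact_mean = math.exp(MU + SIGMA ** 2 / 2)
exact_var  = (math.exp(SIGMA ** 2) - 1) * math.exp(2 * MU + SIGMA ** 2)

print(f"mean: discrete {disc_mean:.4f} vs exact {exact_mean:.4f}")
print(f"var : discrete {disc_var:.4f} vs exact {exact_var:.4f}")

# Monte Carlo estimates of the mean at increasing sample sizes, to
# contrast sampling error with the discretization's approximation error.
random.seed(1)
for n in (100, 1_000, 10_000, 100_000):
    mc_mean = sum(random.lognormvariate(MU, SIGMA) for _ in range(n)) / n
    print(f"MC mean with n={n:>6}: {mc_mean:.4f}")
```

With these assumed parameters, the Swanson discretization visibly understates the exact lognormal mean and variance, while the MC estimate of the mean wanders around the exact value with an error that shrinks only as the number of trials grows, which is the trade-off the paper examines.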