Exponentially increasing data volumes, coupled with new modes of analysis, have created significant new opportunities for data scientists. However, the stochastic nature of many data science techniques results in tradeoffs between cost and accuracy. For example, machine learning models can be trained iteratively and indefinitely, with diminishing returns in accuracy. In this paper we explore the cost-accuracy tradeoff through three representative examples: we vary the number of models in an ensemble, the number of epochs used to train a machine learning model, and the amount of data used to train a machine learning model. We highlight the feasibility and benefits of measuring, quantifying, and predicting cost-accuracy tradeoffs by demonstrating their presence and usability in two case studies.