The Implicit Association Test (IAT), like many behavioral measures, seeks to quantify meaningful individual differences in cognitive processes that are difficult to assess with approaches such as self-report. However, the IAT appears to show low test-retest reliability, and typical scoring methods fail to quantify all of the decision-making processes that generate overt task performance. Here, we develop a new modeling approach for the IAT, the CAVEAT model, which leverages both response times and accuracy on the task to make inferences about representational similarity between the stimuli and categories, as in computational linguistic models of representation. The model disentangles processes related to cognitive control, stimulus encoding, associations between concepts and categories, and processes unrelated to the choice itself. This approach illustrates that the unreliability of the IAT is almost entirely attributable to the methods used to analyze data from the task: the model parameters show test-retest reliability around .8 to .9, on par with that of many of the most reliable self-report measures. Furthermore, we demonstrate that the model parameters are less biased and more accurate than the IAT D-score in predicting outcomes related to intergroup contact and motivation. Together, the model provides much greater reliability, discriminant and predictive validity, and the ability to make inferences about processes such as associations and response caution that are not otherwise possible. We conclude by reviewing new, model-based insights about the IAT related to awareness, strategic caution, faking, and the role of associations in decision-making.