We introduce the CAL model (Category Abstraction Learning), a cognitive framework that formally describes category learning, built on similarity-based generalization, dissimilarity-based abstraction, two attention-learning mechanisms, error-driven knowledge structuring, and stimulus memorization. Our hypotheses draw on a range of empirical and theoretical insights connecting reinforcement learning, category learning, and working memory. The key novelty of the model is its explanation of how rules are learned from scratch, based on three central assumptions. (1) Category rules emerge from two processes, stimulus generalization (similarity) and its direct inverse (category contrast), operating on independent dimensions. (2) Two attention mechanisms guide learning by focusing either on rules or on the contexts in which they produce errors. (3) Knowledge of these contexts inhibits execution of the rule without correcting it, and consequently leads to partial rules being applied in different situations. We show that the model decisively outperforms the established category-learning models ALCOVE (Kruschke, 1992), SUSTAIN (Love, Medin, & Gureckis, 2004), and ATRIUM (Erickson & Kruschke, 1998) on data sets from benchmark studies, including cross-validations based on trial-wise eye movements. Additionally, CAL's three free parameters, which measure abstraction, memorization, and attention control, relate to abilities measured in working memory tasks in a theoretically meaningful way. We illustrate the model's explanatory scope by simulating several phenomena (peak shift, sample-size effects, instruction effects, extrapolation) that had so far been unexplained, or not explained within a single model. We discuss CAL's relation to existing accounts and its promise for understanding the role of attention control and working memory in category learning and related domains.
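
To make assumption (1) concrete, the following minimal Python sketch illustrates how a per-dimension similarity gradient and its direct inverse (a category-contrast signal) could be computed and used to weight dimensions. The exponential gradient, the decay constant `c`, and the attention normalization are illustrative assumptions in the spirit of the Shepard-style similarity function used by ALCOVE; they are not CAL's published equations.

```python
# Illustrative sketch of assumption (1): per-dimension similarity-based
# generalization and its inverse (category contrast) on independent dimensions.
# All function names and the decay constant c are assumptions for illustration.
import numpy as np

def similarity(x, y, c=2.0):
    """Per-dimension exponential similarity gradient (Shepard-style assumption)."""
    return np.exp(-c * np.abs(np.asarray(x, float) - np.asarray(y, float)))

def contrast(x, y, c=2.0):
    """Direct inverse of similarity: per-dimension dissimilarity (category contrast)."""
    return 1.0 - similarity(x, y, c)

# Toy stimuli on two independent dimensions (values in [0, 1]).
stimulus_a = [0.2, 0.8]   # e.g., a member of category A
stimulus_b = [0.7, 0.8]   # e.g., a member of category B

sim = similarity(stimulus_a, stimulus_b)   # high where the stimuli match (dim 2)
con = contrast(stimulus_a, stimulus_b)     # the mirror image, high where they differ (dim 1)

# A dimension with high between-category contrast is a candidate rule dimension;
# normalized contrast gives one (hypothetical) way to set dimension-wise attention.
attention = con / con.sum()
print("similarity per dimension:", sim)
print("contrast per dimension:  ", con)
print("attention (illustrative):", attention)
```

In this toy example the contrast signal singles out dimension 1 as the rule-relevant dimension, while dimension 2, on which the stimuli coincide, receives no attention weight; how CAL actually combines these signals with error-driven learning is specified in the model equations in the main text.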