Counting in natural language presupposes that we can successfully identify what counts as one. We argue that this ability depends on how, and whether, a learner can balance two pressures on learning nominal predicates, which we formalise in probabilistic and information-theoretic terms: individuation (establishing a schema for judging what counts as one with respect to a predicate) and reliability (establishing a reliable criterion for applying a predicate). This hypothesis has two main consequences. First, the mass/count distinction in natural language is a complex phenomenon that is partly grounded in a theory of individuation, which, we contend, must integrate particular qualitative properties of entities, among which a key role is played by those that rely on our spatial perception. Second, the hypothesis allows us to predict when the puzzling variation in mass/count lexicalisation, both cross-linguistically and intra-linguistically, is to be expected: namely, exactly when the two learning pressures of individuation and reliability conflict.