We argue that atomistic learning (learning that requires training only on a novel item to be learned) is problematic for networks in which every weight is available for change in every learning situation. This is potentially significant because atomistic learning appears to be commonplace in humans and most non-human animals. We briefly review various proposed fixes, concluding that the most promising strategy to date involves training on pseudo-patterns along with novel items, a form of learning that is not strictly atomistic, but which looks very much like it 'from the outside'.
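The pseudo-pattern strategy mentioned above can be given a minimal numerical sketch. The following is an illustration only, not the model discussed in this paper: the network architecture, layer sizes, learning rate, and data are all arbitrary assumptions chosen to make the idea concrete. The key move is that "pseudo-patterns" are random inputs paired with the network's own current outputs, so the network can rehearse what it already knows without access to its original training data.

```python
import copy
import numpy as np

rng = np.random.default_rng(0)

class Net:
    """Tiny one-hidden-layer network trained by batch gradient descent on MSE."""
    def __init__(self, n_in=4, n_hid=8, n_out=2):
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hid))
        self.W2 = rng.normal(0.0, 0.5, (n_hid, n_out))

    def forward(self, x):
        h = np.tanh(x @ self.W1)
        return h, h @ self.W2

    def train(self, X, Y, lr=0.1, epochs=500):
        for _ in range(epochs):
            h, out = self.forward(X)
            err = out - Y                                  # gradient of MSE w.r.t. output
            gW2 = h.T @ err
            gW1 = X.T @ ((err @ self.W2.T) * (1.0 - h ** 2))
            self.W1 -= lr * gW1 / len(X)
            self.W2 -= lr * gW2 / len(X)

def loss(net, X, Y):
    _, out = net.forward(X)
    return float(((out - Y) ** 2).mean())

def pseudo_patterns(net, n, n_in=4):
    # Random inputs paired with the network's CURRENT outputs:
    # stand-ins for old training data that is no longer available.
    Xp = rng.uniform(-1.0, 1.0, (n, n_in))
    _, Yp = net.forward(Xp)
    return Xp, Yp

# 'Old knowledge': a handful of arbitrary input-output associations.
X_old = rng.uniform(-1.0, 1.0, (6, 4))
Y_old = rng.uniform(-1.0, 1.0, (6, 2))
net = Net()
net.train(X_old, Y_old)
base = loss(net, X_old, Y_old)

# A single novel item to be learned.
x_new = rng.uniform(-1.0, 1.0, (1, 4))
y_new = rng.uniform(-1.0, 1.0, (1, 2))
pre_new = loss(net, x_new, y_new)

# Atomistic learning: train on the novel item alone, letting every
# weight change; old associations are liable to be overwritten.
naive = copy.deepcopy(net)
naive.train(x_new, y_new)

# Pseudorehearsal: interleave the novel item with pseudo-patterns,
# so learning looks atomistic 'from the outside' but is not strictly so.
rehearsed = copy.deepcopy(net)
Xp, Yp = pseudo_patterns(rehearsed, 32)
rehearsed.train(np.vstack([x_new, Xp]), np.vstack([y_new, Yp]))

print("old-task loss:", base,
      "| after atomistic:", loss(naive, X_old, Y_old),
      "| after pseudorehearsal:", loss(rehearsed, X_old, Y_old))
```

Comparing the final two losses shows how rehearsing the network's own input-output behavior protects prior knowledge while the novel item is learned, even though no stored exemplars of the old training set are consulted.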
The Problem

The assumption that we have the ability to evaluate and add or delete beliefs individually is common in the psychological literature on memory, concept acquisition, and language acquisition (Ramsey, Stich, & Garon, 1990; Cummins, Poirier, & Roth, 2004). Indeed, this supposition pervades our informal, commonsense framework for understanding the mind, as well as our formal-symbolic models of rationality and epistemology. Rationality, for example, is thought of as at least in part about the management of beliefs and other propositional attitudes: what beliefs we should and should not adopt, when and what to add, when and what to delete, etc. Models of rationality tell us when we ought to revise our individual beliefs, and because ought implies can, these models presuppose that we can manage our beliefs individually.
At the same time, most of our current connectionist models of cognition suggest that the knowledge used in carrying out many cognitive tasks related to memory,