Reference point approaches have dominated the study of categorization for decades by explaining classification learning in terms of similarity to stored exemplars or to averages of exemplars. The most successful reference point models are firmly grounded in the associative learning tradition: they treat categorization as a stimulus generalization process based on inverse-exponential distance in psychological space, augmented by a dimensional selective attention mechanism. We present experiments that pose a significant challenge to popular reference point accounts which explain categorization in terms of stimulus generalization from exemplars, prototypes, or adaptive clusters. DIVA, a similarity-based alternative to the reference point framework, provides a successful account of the human data. These findings suggest that a successful psychology of categorization may need to look beyond stimulus generalization and toward a view of category learning as the induction of a richer model of the data.
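The stimulus generalization mechanism described above (exponential decay of similarity with distance, modulated by dimensional attention) is the core of exemplar models such as the GCM. A minimal sketch, with hypothetical toy data and parameter values chosen only for illustration:

```python
import numpy as np

def gcm_similarity(x, exemplar, attention, c=1.0):
    # Similarity decays exponentially with attention-weighted city-block distance
    return np.exp(-c * np.sum(attention * np.abs(x - exemplar)))

def classify(x, exemplars_by_class, attention, c=1.0):
    # Summed similarity to each class's stored exemplars, normalized to
    # response probabilities (Luce choice rule)
    sims = np.array([sum(gcm_similarity(x, e, attention, c) for e in ex)
                     for ex in exemplars_by_class])
    return sims / sims.sum()

# Toy 2-D stimuli: the classes differ mainly on dimension 0, so attention
# weighted toward that dimension sharpens the classification.
exemplars = [np.array([[0., 0.], [0., 1.]]),   # class A
             np.array([[1., 0.], [1., 1.]])]   # class B
probs = classify(np.array([0.1, 0.9]), exemplars, attention=np.array([0.9, 0.1]))
print(probs)  # higher probability for class A
```

Shifting the attention weights toward the uninformative dimension would pull the probabilities back toward chance, which is the role selective attention plays in these models.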
While the ability to acquire non-linearly separable (NLS) classifications is well documented in the study of human category learning, the relative ease of learning an NLS structure compared to a linearly separable one is difficult to evaluate without potential confounds.
Since the work of Minsky and Papert (1969), it has been understood that single-layer neural networks cannot solve nonlinearly separable classifications (i.e., XOR). We describe and test a novel divergent autoassociative architecture capable of solving nonlinearly separable classifications with a single layer of weights. The proposed network consists of class-specific linear autoassociators. The power of the model comes from treating classification problems as within-class feature prediction rather than directly optimizing a discriminant function. We show unprecedented learning capabilities for a simple, single-layer network (i.e., solving XOR) and demonstrate that the famous limitation in acquiring nonlinearly separable problems is not just about the need for a hidden layer; it is about the choice between directly predicting classes or learning to classify indirectly by predicting features.
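The abstract's central claim can be illustrated with a minimal sketch (this is not the authors' implementation, which trains the networks by gradient descent; here each class-specific linear autoassociator is simply fit by least squares, and a stimulus is assigned to whichever class reconstructs it with less error):

```python
import numpy as np

def fit_autoassociator(X):
    # One linear autoassociator (with bias) per class: find W minimizing
    # the squared error of predicting the class's features from themselves.
    X_aug = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    W, *_ = np.linalg.lstsq(X_aug, X, rcond=None)
    return W

def reconstruction_error(W, x):
    x_aug = np.append(x, 1.0)
    return float(np.sum((x_aug @ W - x) ** 2))

# XOR: class 0 = {(0,0), (1,1)}, class 1 = {(0,1), (1,0)}
W0 = fit_autoassociator(np.array([[0., 0.], [1., 1.]]))
W1 = fit_autoassociator(np.array([[0., 1.], [1., 0.]]))

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    x = np.array(x, dtype=float)
    errs = [reconstruction_error(W0, x), reconstruction_error(W1, x)]
    print(x, '-> class', int(np.argmin(errs)))
```

Each class's two training points lie on a line, so an affine map reconstructs them (nearly) perfectly while reconstructing the other class's points poorly; the minimum-error rule therefore recovers XOR with no hidden layer, exactly the indirect classify-by-predicting-features strategy the abstract describes.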