Models of categorization make different representational assumptions, with categories being represented by prototypes, sets of exemplars, and everything in between. Rational models of categorization justify these representational assumptions in terms of different schemes for estimating probability distributions. However, they do not answer the question of which scheme should be used in representing a given category. We show that existing rational models of categorization are special cases of a statistical model called the hierarchical Dirichlet process, which can be used to automatically infer a representation of the appropriate complexity for a given category.
Categorization as nonparametric Bayesian density estimation

Rational models of cognition aim to explain the structure of human thought and behavior as an optimal solution to the computational problems posed by our environment (Anderson, 1990; Chater & Oaksford, 1999; Marr, 1982; Oaksford & Chater, 1998). Rational models have been developed for several aspects of cognition, including memory (Anderson, 1990; Shiffrin & Steyvers, 1997), reasoning (Oaksford & Chater, 1994), generalization (Shepard, 1987; Tenenbaum & Griffiths, 2001), and causal induction (Anderson, 1990; Griffiths & Tenenbaum, 2005). By examining the computational problems that underlie our cognitive capacities, it is often possible to gain a deeper understanding of the assumptions behind successful models of human cognition, and to discover new classes of models that might otherwise have been overlooked.

In this chapter, we pursue a rational analysis of category learning: inferring the structure of categories from a set of stimuli labeled as belonging to those categories. The knowledge acquired through this process can ultimately be used to make decisions about how to categorize new stimuli. Several rational analyses of category learning have been proposed (Anderson, 1990; Ashby & Alfonso-Reese, 1995; Nosofsky, 1998). These analyses essentially agree on the nature of the computational problem involved, casting category learning as a problem of density estimation: determining the probability distributions associated with different category labels. Viewing category learning in this way helps to clarify the assumptions behind the two main classes of psychological models: exemplar models and prototype models.
Exemplar models assume that a category is represented by a set of stored exemplars, and categorizing new stimuli involves comparing these stimuli to the set of exemplars in each category (e.g., Medin & Schaffer, 1978; Nosofsky, 1986). Prototype models assume that a category is associated with a single prototype, and categorization involves comparing new stimuli to these prototypes (e.g., Reed, 1972).
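The density-estimation framing makes the contrast between these two model classes concrete: an exemplar model corresponds to a kernel density estimate built from all stored category members, while a prototype model corresponds to a single parametric distribution centered on the category mean. The following sketch illustrates this correspondence for one-dimensional stimuli; the Gaussian kernels, equal category priors, and toy exemplar values are illustrative assumptions, not part of any specific model in the literature.

```python
import numpy as np

def gaussian(x, mu, sd):
    # One-dimensional Gaussian density.
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def exemplar_density(x, exemplars, sd=1.0):
    # Exemplar model: the category density is an average of kernels
    # placed on every stored exemplar (a kernel density estimate).
    return np.mean([gaussian(x, e, sd) for e in exemplars])

def prototype_density(x, exemplars, sd=1.0):
    # Prototype model: the category density is a single Gaussian
    # centered on the category mean (the prototype).
    return gaussian(x, np.mean(exemplars), sd)

def p_category_a(x, cat_a, cat_b, density):
    # Posterior probability of category A under equal priors (Bayes' rule).
    pa, pb = density(x, cat_a), density(x, cat_b)
    return pa / (pa + pb)

# Hypothetical training exemplars for two categories.
cat_a = [0.0, 1.0, 2.0]
cat_b = [4.0, 5.0, 6.0]

# A stimulus near category A is classified as A under either representation.
p_ex = p_category_a(1.5, cat_a, cat_b, exemplar_density)
p_pr = p_category_a(1.5, cat_a, cat_b, prototype_density)
```

With well-separated categories the two representations agree, as here; they come apart when category distributions are multimodal or skewed, which is exactly the representational question the hierarchical Dirichlet process analysis addresses.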