2010
DOI: 10.3758/pbr.17.4.443
Exemplar models as a mechanism for performing Bayesian inference

Cited by 114 publications (133 citation statements)
References 44 publications
“…One such model is Lacerda's (1995) model of categorical effects as emergent from exemplar-based categorization, which has a direct mathematical link to our model, as described by Feldman et al (2009). Shi et al (2010) also provide an implementation of our model in an exemplar-based framework, which is closely related to the neural model proposed by Guenther and Gjaja (1996). In that model, there are more exemplars, or a higher level of neural firing, at category centers than near category boundaries.…”
Section: Ockham's Razor and Levels of Analysis
confidence: 99%
“…These kinds of models are sometimes called rational process models, since they are models of rational learners that are concerned with implementing the process of approximating Bayesian inference. For example, Shi, Griffiths, Feldman, & Sanborn (2010) discuss how exemplar models may provide a possible mechanism for implementing Bayesian inference, since these models allow an approximation process called importance sampling. Other examples include the work of Bonawitz et al (2011), who discuss how a simple sequential algorithm can be used to approximate Bayesian inference in a basic causal learning task, and that of Pearl, Goldwater, and Steyvers (2011), who (as described in section 3) investigated various online algorithms for Bayesian models of word segmentation.…”
Section: Algorithms
confidence: 99%
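The exemplar-models-as-importance-sampling idea quoted above can be sketched in a few lines. This is a minimal illustration, not the implementation from Shi, Griffiths, Feldman, & Sanborn (2010): the Gaussian prior, the perceptual noise level, and the stimulus value are all assumed here for the example. Stored exemplars act as samples from the prior, and weighting each exemplar by its likelihood yields an importance-sampling estimate of a posterior expectation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stored exemplars, treated as samples from the prior over the true
# stimulus value (assumed Gaussian prior, purely for illustration).
exemplars = rng.normal(loc=0.0, scale=2.0, size=5000)

# Observed stimulus, with assumed Gaussian perceptual noise.
x = 1.5
noise_sd = 1.0

# Importance weights: likelihood of the observation under each exemplar.
weights = np.exp(-0.5 * ((x - exemplars) / noise_sd) ** 2)
weights /= weights.sum()

# Likelihood-weighted average of exemplars approximates the posterior
# mean (importance sampling with the prior as the proposal).
estimate = np.sum(weights * exemplars)

# For this conjugate Gaussian setup the exact posterior mean is
# x * prior_var / (prior_var + noise_var) = 1.5 * 4 / 5 = 1.2,
# so the estimate should land close to 1.2.
```

The point of the sketch is that no explicit Bayes rule computation is needed: simply averaging stored exemplars, weighted by their similarity to the observation, approximates the posterior expectation.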
“…The computationally-justified message passing scheme we developed, FBL, uses the same class of approximations as Daw et al (2008), approximations that have also been applied to categorization (Shi et al, 2010), sentence parsing (Levy et al, 2009), prediction (Brown & Steyvers, 2009), perceptual bistability (Gershman et al, 2012), and even human and animal learning (Lu et al, 2008; Rojas, 2010) to explain trial order effects. Sampling algorithms tend to come with asymptotic guarantees: with enough samples any computation done with these algorithms will be indistinguishable from computation done with the full probability distribution.…”
Section: Kinds of Approximations
confidence: 99%
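The asymptotic guarantee described in that excerpt can be demonstrated with a generic Monte Carlo estimate (an illustration of the general property, not of the FBL scheme itself): the error of a sample average shrinks toward zero as the number of samples grows, at roughly a 1/sqrt(n) rate.

```python
import numpy as np

rng = np.random.default_rng(1)
true_mean = 0.5  # exact mean of Uniform(0, 1)

# Estimate the mean with increasing sample counts; with enough samples
# the estimate becomes indistinguishable from the exact value.
errors = {}
for n in (100, 10_000, 1_000_000):
    samples = rng.uniform(0.0, 1.0, size=n)
    errors[n] = abs(samples.mean() - true_mean)
```

With a million samples the absolute error is on the order of the standard error 0.29/sqrt(n), i.e. well under 0.005, which is the sense in which sampling-based approximations carry asymptotic guarantees.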
“…A classic way to combine computational-and algorithmic-level insights is to begin with an algorithmic-level model developed to fit human behavior and then investigate its computational-level properties (Ashby & Alfonso-Reese, 1995;Gigerenzer & Todd, 1999). This is not the only possible direction, and recently researchers have begun at the computational level of analysis and then worked toward understanding the algorithm (Griffiths et al, 2012;Sanborn et al, 2010;Shi et al, 2010).…”
confidence: 99%