Maximum Entropy Inference and Stimulus Generalization (1996)
DOI: 10.1006/jmps.1996.0033

Cited by 15 publications (12 citation statements). References 12 publications.
“…Griffiths and Tenenbaum (2006) studied people's predictions about a variety of everyday events, including the grosses of movies and the time to bake a cake, and found that these predictions corresponded strikingly well with the actual distributions of these quantities. In each case, people were asked to predict the total extent or duration of a quantity on the basis of its current value, such as how much money a movie would make on the basis of how much it has made so far, or how long a cake would be in the oven on the basis of how long it has currently been in the oven. […] He then gave a probabilistic explanation for this phenomenon that was later formulated in a Bayesian framework (Myung & Shepard, 1996; Tenenbaum & Griffiths, 2001). Here, we use the notation originally introduced by Shepard (1987).…”
Section: Simulation 3: Predicting the Future
confidence: 99%
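The prediction task described in this citation statement lends itself to a compact numerical illustration. The sketch below is a minimal Bayesian version of that setup: the observed current value is treated as a uniform draw from the unknown total, and the prediction is the posterior median. The uniform-sampling likelihood is the standard one for this problem, but the two priors, their parameters, the grid, and the function names are illustrative assumptions of mine, not values fitted by Griffiths and Tenenbaum (2006).

```python
import numpy as np

def posterior_median_total(t_current, prior_pdf, t_grid):
    """Predict a total extent t_total from a current value t_current.

    Likelihood: the current value is treated as a uniform random draw from
    [0, t_total], so p(t_current | t_total) = 1 / t_total for
    t_total >= t_current and 0 otherwise.  The returned prediction is the
    posterior median of t_total, evaluated numerically on t_grid.
    """
    dt = t_grid[1] - t_grid[0]
    post = np.where(t_grid >= t_current, prior_pdf(t_grid) / t_grid, 0.0)
    post = post / (post.sum() * dt)           # normalize the posterior on the grid
    cdf = np.cumsum(post) * dt
    return t_grid[np.searchsorted(cdf, 0.5)]  # posterior median

t_grid = np.linspace(1.0, 2000.0, 40000)

# Heavy-tailed (power-law) prior, loosely in the spirit of movie grosses.
power_law = lambda t: t ** -1.5

# Roughly bell-shaped prior, loosely in the spirit of cake baking times (minutes).
bell = lambda t: np.exp(-0.5 * ((t - 50.0) / 15.0) ** 2)

print(posterior_median_total(30.0, power_law, t_grid))  # scales up the current value
print(posterior_median_total(30.0, bell, t_grid))       # pulled toward the prior mean
```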
“…Similarly, the ALCOVE model of category learning (Kruschke, 1992) makes categorization decisions by potentially considering the weighted sum of evidence for each category alternative provided by every stimulus in a domain, and the same is true of the theoretically closely related context model (Medin & Schaffer, 1978) and generalized context model (Nosofsky, 1984). There are also various Bayesian cognitive models, including accounts of generalization (Myung & Shepard, 1996; Shepard, 1987) and concept learning (Tenenbaum, 1999; Tenenbaum & Griffiths, 2001), that integrate across prior-weighted probability densities to determine response probabilities and so strive for substantive rationality in a very direct way. Finally, there are substantively rational psychological models, most notably Anderson's (1990, 1991, 1992) rational model, that introduce time and memory constraints into the criteria for decision making but continue to allow for the weighting and combination of all of the relevant available evidence to optimize decisions under these criteria.…”
Section: The Rational Approach
confidence: 99%
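To make the summed-similarity idea attributed above to the context model and generalized context model concrete, here is a minimal sketch: similarity to each stored exemplar decays exponentially with Minkowski distance, and the evidence for a category is the summed similarity to its exemplars. The exemplar coordinates, the sensitivity parameter c, and the function name are illustrative assumptions, not values from the cited papers.

```python
import numpy as np

def gcm_choice_probs(probe, exemplars, labels, c=2.0, r=1):
    """Generalized-context-model-style choice probabilities.

    Similarity of the probe to each stored exemplar decays exponentially with
    Minkowski r-metric distance (r = 1 city-block, r = 2 Euclidean); evidence
    for each category is the summed similarity to that category's exemplars.
    """
    dists = np.sum(np.abs(exemplars - probe) ** r, axis=1) ** (1.0 / r)
    sims = np.exp(-c * dists)
    evidence = np.array([sims[labels == k].sum() for k in np.unique(labels)])
    return evidence / evidence.sum()

# Two toy categories in a two-dimensional psychological space (illustrative data).
exemplars = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
labels = np.array([0, 0, 1, 1])
print(gcm_choice_probs(np.array([0.3, 0.3]), exemplars, labels))
```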
“…In the way that I indicated earlier, the shapes of the equal-generalization contours produced by the Bayesian integration implicate a spatial metric that locally is approximately Euclidean if these extensions have been highly correlated. But the shapes of the equal-generalization contours produced by this integration implicate a metric that deviates from the Euclidean (in the manner of a Minkowski r-metric with r < 2) to the extent that these extensions have been uncorrelated (Myung & Shepard, 1996; Shepard, 1987, 1991). (The same theory provides an explanation for a striking difference in classification learning that depends on whether the dimensions of the stimuli are integral or separable, as shown in the experiments of Garner, 1974; Nosofsky, 1986; Shepard & Chang, 1963; and Shepard, Hovland, & Jenkins, 1961.)…”
Section: Law of Musical Transformation
confidence: 93%
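The Bayesian integration referred to in this passage can be illustrated with a small Monte Carlo sketch in the spirit of Shepard (1987) and Myung and Shepard (1996): generalization is the probability that a rectangular consequential region known to contain the training stimulus also contains the test stimulus, averaged over hypothesized region sizes. The exponential size prior, the sample count, and the function name are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def generalization(d, correlated, n=200_000):
    """Probability that a rectangular consequential region known to contain the
    training stimulus (at the origin) also contains a test stimulus at
    displacement d = (d1, d2), averaged over region sizes.

    For a region of extension s_i whose location is uniform given that it
    covers the origin, the per-dimension coverage probability is
    max(0, 1 - |d_i| / s_i).  Extensions are drawn from an (illustrative)
    exponential prior; correlated extensions share one draw, uncorrelated
    extensions are drawn independently per dimension.
    """
    s1 = rng.exponential(1.0, n)
    s2 = s1 if correlated else rng.exponential(1.0, n)
    p = np.clip(1 - abs(d[0]) / s1, 0, None) * np.clip(1 - abs(d[1]) / s2, 0, None)
    return p.mean()

# Compare generalization along an axis with generalization along the diagonal at
# equal Euclidean distance: correlated extensions give nearly equal values
# (roughly circular contours), whereas independent extensions give markedly
# weaker generalization along the diagonal, pulling the contours toward the
# city-block diamond.
for corr in (True, False):
    axis = generalization((0.5, 0.0), corr)
    diag = generalization((0.5 / np.sqrt(2), 0.5 / np.sqrt(2)), corr)
    print(f"correlated={corr}: axis={axis:.3f}, diagonal={diag:.3f}")
```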
“…If, however, the extensions of the consequential regions along different dimensions are assumed to be uncorrelated, the slope of the computed exponential fall-off varies in a different way with direction in the space. This case, which seems more appropriate for separable dimensions such as those of color and shape (Attneave, 1950) or of size and orientation (Shepard, 1964a), yields equal-generalization contours that approximate the diamond shape corresponding to the so-called "city-block" metric implicated by Attneave (see Myung & Shepard, 1996; Shepard, 1987, 1991). The fundamentally Bayesian approach to generalization briefly described here is being extended to the formulation of additional fundamental laws of inductive inference by Joshua Tenenbaum and Thomas Griffiths (2001a, 2001b).…”
Section: Investigations of Generalization
confidence: 99%
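For reference, the Minkowski r-metric and the exponential generalization gradient discussed in these passages can be written compactly; this is standard notation, with x and y denoting points in a k-dimensional psychological space and c a sensitivity constant.

```latex
% Minkowski r-metric: r = 1 gives the city-block metric, r = 2 the Euclidean metric
d_r(x, y) = \left( \sum_{i=1}^{k} \lvert x_i - y_i \rvert^{r} \right)^{1/r}

% Exponential generalization gradient (Shepard's universal law of generalization)
g(x, y) \approx e^{-c\, d_r(x, y)}, \qquad c > 0
```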