Despite what the somewhat technical name might suggest, multiple cue probability learning (MCPL) problems are commonly encountered in daily life. For instance, we may have to judge whether it will rain from cues such as temperature, humidity, and the time of year. Or, we may have to judge whether someone is telling the truth from cues such as pitch of voice, level of eye contact, and rate of eye blinks. While informative, these cues are not perfect predictors. How do we learn to solve such problems? How do we learn which cues are relevant, and to what extent? How do we integrate the available information into a judgement?

Applying a rational analysis (Anderson, 1990; Oaksford & Chater, 1998), we would answer these questions by specifying a rational model, and then comparing individuals' judgements to the model-predicted judgements. Insofar as observed behaviour matches the predicted behaviour, we would conclude that people learn these tasks as rational agents. Here, we take a slightly different approach. We still use rational models, but rather than comparing predicted behaviour to actual behaviour (a comparison in what we might call observation space), we make the comparison in parameter space.

To be a little less obscure, let's take an agent who must repeatedly predict share price from past share price. A rational agent would make predictions which are optimal given an observed pattern of past share price. The question is whether a real (human) agent makes predictions as the rational agent would. Assume current share price $Y_t$ is related to past share price $Y_{t-1}$ as $Y_t = \beta Y_{t-1}$, where $\beta$ is an unknown constant. In order to make accurate predictions, the agent must infer the value of $\beta$ from repeated observations of share price. A rational agent, with optimal estimates $w_t$, will make predictions $\hat{y}_t = w_t Y_{t-1}$. Since the relation is rather simple, and the agent is rational, the inferences $w_t$ (and hence predictions $\hat{y}_t$) will be accurate quite quickly.
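The rational agent's inference can be sketched as a simple simulation. The code below is a minimal illustration, not the text's own model specification: it assumes a noisy version of the relation, $Y_t = \beta Y_{t-1} + \epsilon_t$, and lets the rational estimate $w_t$ be the least-squares coefficient (through the origin) over the trials observed so far. The true $\beta$, noise level, and trial count are all made-up numbers for illustration.

```python
import numpy as np

# Illustrative assumptions (not from the text): a true beta, Gaussian
# observation noise, and a fixed number of trials.
rng = np.random.default_rng(0)
beta = 1.02       # true (unknown to the agent) coefficient
T = 200
Y = np.empty(T)
Y[0] = 100.0
for t in range(1, T):
    Y[t] = beta * Y[t - 1] + rng.normal(scale=1.0)

def w_estimate(Y, t):
    """Rational estimate w_t after t transitions: least squares through
    the origin on the pairs (Y_0, Y_1), ..., (Y_{t-1}, Y_t)."""
    x, y = Y[:t], Y[1:t + 1]
    return float(x @ y / (x @ x))

# The estimate homes in on beta quickly, as the text notes.
w_final = w_estimate(Y, T - 1)
print(round(w_final, 3))
```

The prediction on trial $t$ is then simply `w_estimate(Y, t) * Y[t - 1]`, i.e. $\hat{y}_t = w_t Y_{t-1}$.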
Enter the real (rational?) agent, making predictions $R_t$. Rather than assuming these predictions follow from rational estimates $w_t$, we assume they are based on $R_t = u_t Y_{t-1}$, where $u_t$ is a coefficient not necessarily equal to $w_t$. Thus, we assume the same structural model for rational and actual predictions, but allow for different parameters $u_t$ and $w_t$. By comparing $u_t$ to $w_t$, we can see how the real agent's learning compares to the rational agent's learning.
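The parameter-space comparison can be illustrated as follows. Everything about the "human" learner here is a hypothetical stand-in: we simulate a learner whose coefficient drifts toward the true $\beta$ via a simple error-reduction schedule (an assumption, not a claim about how people actually learn), recover $u_t$ from the predictions via $u_t = R_t / Y_{t-1}$, and track the gap between $u_t$ and the rational $w_t$ over trials.

```python
import numpy as np

# Illustrative setup, same assumed noisy process as before.
rng = np.random.default_rng(1)
beta, T = 1.02, 100
Y = np.empty(T)
Y[0] = 100.0
for t in range(1, T):
    Y[t] = beta * Y[t - 1] + rng.normal(scale=1.0)

# Rational coefficients w_t: least squares through the origin on the
# transitions observed up to trial t.
w = np.array([float(Y[:t] @ Y[1:t + 1] / (Y[:t] @ Y[:t]))
              for t in range(1, T)])

# Hypothetical human learner: starts at 1.0 and closes a fixed fraction
# of the remaining distance to beta on each trial (purely illustrative).
u_true = 1.0 + (beta - 1.0) * (1 - 0.95 ** np.arange(1, T))
R = u_true * Y[:-1]        # the agent's observed predictions R_t
u = R / Y[:-1]             # recovered coefficients: u_t = R_t / Y_{t-1}

# The comparison happens in parameter space: |u_t - w_t| per trial,
# rather than comparing predictions R_t to \hat{y}_t directly.
gap = np.abs(u - w)
print(gap[0], gap[-1])
```

Under these assumptions the gap shrinks over trials: the simulated learner's $u_t$ approaches the rational $w_t$, which is exactly the kind of trajectory the parameter-space comparison is meant to reveal.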