Rats were trained and matched on a delayed-nonmatching-to-sample (DNMTS) task and randomly assigned to treatment. In Experiment 1, radio-frequency (RF) lesions were aimed at lateral portions of the internal medullary lamina (L-IML), midline thalamus (MT), mammillary bodies (MB), and the combination of MT and MB. In Experiment 2, RF lesions were aimed at the fornix. After recovery, DNMTS was retrained at retention intervals of 3.0-18.0 s, the critical retention interval for 75% DNMTS accuracy was determined by a staircase procedure, and spontaneous exploration was observed in an open field. L-IML lesions produced significant deficits on DNMTS and exploratory behavior that were comparable to deficits on the same tasks in rats recovered from pyrithiamine-induced thiamine deficiency. Fornix lesions produced significant DNMTS deficits that were substantially smaller than for the L-IML group. The MT, MB, and MT+MB treatments had no significant effect on DNMTS.
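The critical retention interval above was estimated with a staircase procedure. The abstract does not give the exact rule, so the following is a minimal sketch of one common adaptive rule (a weighted up-down staircase) that converges on the delay supporting roughly 75% accuracy; the step size and the toy trial outcomes are illustrative assumptions, not the authors' actual parameters:

```python
def staircase_step(delay, correct, step=0.5, p_target=0.75):
    """One step of a weighted up-down staircase.

    A correct response lengthens the retention interval (harder trial);
    an error shortens it, with the down-step scaled by p/(1 - p) so the
    track equilibrates where accuracy is approximately p_target."""
    if correct:
        return delay + step
    return delay - step * p_target / (1.0 - p_target)

# Toy trial sequence: the delay drifts toward the interval that
# sustains about 75% correct responding.
delay = 3.0
for correct in [True, True, True, False, True, False]:
    delay = staircase_step(delay, correct)
```

The critical interval is then typically read off as the average delay at the last several reversal points of the track.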
We attempted to determine whether timing theories developed primarily to explain performance in fixed-interval reinforcement schedules are also applicable to variable intervals. Groups of rats were trained in lever boxes on peak procedures with a 30-, 45-, or 60-s interval, or a 30- to 60-s uniform distribution (Experiment 1); a 60-s fixed and 1- to 121-s uniform distribution between and within animals (Experiment 2); and a procedure in which the interval between food and next available food gradually changed from a fixed 60 s to a uniform distribution between 0 and 120 s (Experiment 3). In uniform interval schedules rats made lever responses at particular times since food, as measured by the distribution of food-food intervals, the distribution of postreinforcement pauses, and the mean response rate as a function of time since food. Qualitative features of this performance are described by a multiple-oscillator connectionist theory of timing.
Sequential priming refers to speeded visual search when target identity or location is repeated within a trial sequence. In two experiments with pigeons, we addressed the relative contributions of stimulus-driven factors and learned expectancies to this effect. Pigeons pecked at targets during trialwise presentations of visual-search displays. Random-sequence conditions minimized the role of expectancy by introducing same-target or same-location trial sequences unpredictably. Blocked-sequence conditions added predictability by regular repetition of target and/or location over trials. Intertrial interval varied from 0.5 to 3 sec. The findings revealed significant reductions in reaction time during predictable target or location sequences compared with unpredictable repetitions within random contexts. Stimulus-driven factors do not seem to have an important role in many instances of sequential priming. Expectancy-based priming of target and location followed similar patterns.
The problem was to determine how rats adjust the times of their lever responses to repeating sequences of interfood intervals. In Experiment 1, rats were trained on an interval schedule of reinforcement with a 12-element Fleshler-Hoffman series with a mean of 60 sec; the order of conditions was as follows: ascending, random with repetition, random with replacement, random without replacement. In Experiment 2, rats were trained with a 10-element ascending or descending series (from 20 to 29 sec), and in a ramp procedure in which these intervals increased and then decreased repeatedly. In the ascending, descending, and ramp conditions (but not in the random conditions), postreinforcement pause (PRP) was a function of the interval. PRP was most highly correlated with an interval later in the series. Theories of conditioning and timing based on the averaging of past experience must be modified to account for such anticipatory behavior.

Animals learn to adjust their behavior on the basis of past experiences. Theories of learning describe the way that these experiences are aggregated, normally by some type of averaging mechanism. In stochastic models of learning, current performance is determined jointly by the most recent example and all previous examples (Bush & Mosteller, 1955; Rescorla & Wagner, 1972). Normally, the weight given to the recent example is small, so that adjustments are gradual; in some cases, the weight given to the recent example is large, so that adjustments are rapid. But the linear averaging of stochastic models of learning does not provide a way for animals to predict the future on the basis of trends. In an unchanging environment, one cannot determine whether animals are affected by the long-term past, the recent past, or the pattern of events.
If animals used a weighted average of previous intervals with a small or large weight for recent intervals, or if they used the past pattern of intervals to predict the future intervals, they would have the same behavior in a constant environment. To determine whether or not animals are responsive to patterns as well as averages, it is necessary to present sequences of examples that are not all the same. In a series of experiments on the effect of changes in the magnitude of reinforcement on running speed of rats in a straight alley, rats have been shown to be sensitive to the pattern of the changes in amount, and not simply the short-term or long-term average amount (Fountain & Hulse,
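The linear averaging assumed by stochastic models of the Bush and Mosteller type can be sketched in a few lines; the learning rate and the toy ascending series (20-29 s, echoing the sequences described above) are illustrative assumptions:

```python
def linear_average(estimate, sample, alpha=0.5):
    """Linear-operator update: the estimate moves a fixed fraction
    alpha toward the most recent sample, which amounts to an
    exponentially weighted average of all past samples."""
    return estimate + alpha * (sample - estimate)

# Feed the model an ascending series of interfood intervals.
estimate = 20.0
for interval in range(20, 30):   # 20, 21, ..., 29 s
    estimate = linear_average(estimate, float(interval))

# The estimate lags the trend: after seeing the 29-s interval it still
# predicts less than 29 s, and nothing in the update rule lets it
# anticipate the reset back to 20 s on the next cycle.
```

Because the update tracks a running mean of recent intervals regardless of their arrangement, only patterned (non-constant) sequences such as the ascending, descending, and ramp conditions can distinguish genuine anticipation of the next interval from averaging of past ones.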