Two experiments examined the relation between response variability and sensitivity to changes in reinforcement contingencies. In Experiment 1, two groups of college students were provided complete instructions regarding a button-pressing task; the instructions stated "press the button 40 times for each point" (points were exchangeable for money). Two additional groups received incomplete instructions that omitted the pattern of responding required for reinforcement under the same schedule. Sensitivity was tested in one completely instructed and one incompletely instructed group after responding had met a stability criterion, and in the remaining two groups after a short exposure to the original schedule. The three groups whose responding was completely instructed or had met the stability criterion showed little variability at the moment of change in the reinforcement schedule. The responding of these three groups was also insensitive to the contingency change. Incompletely instructed responding tested after short exposure was more variable at the moment of the schedule change and was sensitive to the new contingency in four of six cases. In Experiment 2, completely and incompletely instructed responding first met a stability criterion. This was followed by a test that showed no sensitivity to a contingency change. A strategic instruction was then presented that stated variable responding would work best. Five of six subjects showed increased variability after this instruction, and all six showed sensitivity to the contingency change. The findings are discussed from a selectionist perspective that describes response acquisition as a process of variation, selection, and maintenance. From this perspective, sensitivity to contingency changes is described as a function of variables that produce response variability.
College students were instructed to press a button for points under a single reinforcement schedule or under a variety of reinforcement schedules. Instructions for a single schedule were either specific or minimal. Instructions for a variety of schedules involved specific instructions on eight different schedules of reinforcement. Subsequent to the varied training, responding under a fixed-interval schedule occurred at a low rate. Both the minimal and the specific instruction training led to fixed-interval responding that was similar to the responding exhibited during training. These findings suggest that under certain conditions instructed behavior is sensitive to changes in contingencies.

Key words: variety of training, instructed behavior, contingency-shaped behavior, response history, efficiency of responding, reinforcement schedules, button press, adult humans

Theoretical and empirical developments in behavior analysis have suggested that behavior acquired by following an instruction may be less sensitive to changes in prevailing contingencies than behavior acquired by shaping (Baron & Galizio, 1983; Baron, Kaufman, & Stauber, 1969; Galizio, 1979; Harzem, Lowe, & Bagshaw, 1978; Matthews, Shimoff, Catania, & Sagvolden, 1977; Skinner, 1966, 1969; Vaughan, 1985). For example, subjects who have been instructed to respond under one schedule of reinforcement continue to respond as instructed even when the schedule of reinforcement has changed (Baron et al., 1969; Harzem et al., 1978). Insensitivity to changing contingencies is less likely to occur when behavior is shaped by successive approximations, or when instructions are used that do not describe the specific schedules (i.e., minimal instructions; Matthews et al., 1977; Shimoff, Catania, & Matthews, 1981).

These findings give rise to the question of which variables determine the sensitivity of human behavior to various and varying contingencies. Weiner (1969, 1970a) found that specific histories of responding under schedules of reinforcement were necessary to bring about sensitivity to fixed-interval (FI) schedules. Training under a differential-reinforcement-of-low-rate (DRL) schedule was sufficient to bring about sensitive performance under FI schedules even when the subjects had a history of responding at high rates under fixed-ratio (FR) schedules. When either no response history or a high-rate response history was provided, high rates occurred under the FI schedule. Weiner's studies, however, did not examine the interaction of reinforcement histories and instructions.

Galizio (1979) suggested that under instructed conditions sensitivity occurs only when behavior comes into contact with the change in contingencies. When avoidance behavior was instructed under a point-loss procedure and a schedule was then introduced in which the loss contingency was no longer in effect, behavior did not change. However, when continued responding as instructed resulted in a loss of points, performance quickly adjusted to these conditions. Under the first condition, responses did not...
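The schedule abbreviations above (FR, FI, DRL) name different rules relating responses and elapsed time to point delivery. The following Python sketch is a rough illustration only, not code or parameter values from any of the cited studies; the function names and the simulated response rate are hypothetical. It shows why a high, steady response rate that is efficient under a fixed-ratio schedule earns few additional points under a fixed-interval schedule, which is the kind of mismatch the history and instruction manipulations above are probing.

def fixed_ratio(n):
    """Deliver a point after every nth response (e.g., FR 40: one point per 40 presses)."""
    count = 0
    def schedule(responded, elapsed_s):
        nonlocal count
        if responded:
            count += 1
            if count >= n:
                count = 0
                return True
        return False
    return schedule

def fixed_interval(t_s):
    """Deliver a point for the first response made at least t_s seconds after the last point."""
    last_point = 0.0
    def schedule(responded, elapsed_s):
        nonlocal last_point
        if responded and (elapsed_s - last_point) >= t_s:
            last_point = elapsed_s
            return True
        return False
    return schedule

# A steady high-rate presser (about 3 presses per second for 10 minutes):
# FR pays in proportion to responses, FI pays at most once per interval,
# so the same high rate is efficient under FR 40 but wasteful under FI 30 s.
for name, schedule in [("FR 40", fixed_ratio(40)), ("FI 30 s", fixed_interval(30.0))]:
    points = 0
    for second in range(600):
        for _ in range(3):
            points += schedule(True, float(second))
    print(name, "points:", points)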
Human operant behavior is often said to be controlled by different variables or governed by different processes than nonhuman operant behavior. Support for this claim within the operant literature comes from data suggesting that human behavior is often insensitive to schedules of reinforcement to which nonhuman behavior has been sensitive. The data that evoke the use of the terms sensitivity and insensitivity, however, result from both between-species and within-subject comparisons. We argue that because sensitivity is synonymous with experimental control, conclusions about sensitivity are best demonstrated through within-subject comparisons. Further, we argue that even when sensitivity is assessed using within-subject comparisons of performance on different schedules of reinforcement, procedural differences between studies of different species may affect schedule performance in important ways. We extend this argument to age differences as well. We conclude that differences across populations are an occasion for more precise experimental analyses and that it is premature to conclude that human behavior is controlled by different processes than nonhuman behavior.
A repeated acquisition design was used to study the effects of instructions and differential reinforcement on the performance of complex chains by undergraduates. The chains required responding on a series of keys that corresponded to characters that appeared on a monitor. Each day, subjects performed a new chain in a learning session and later relearned the same chain in a test session. Experiment 1 replicated previous research by showing that instructional stimuli paired with the correct responses in the learning sessions, combined with differential reinforcement in both learning and test sessions, resulted in stimulus control by the characters in each link. Experiment 2 separated the effects of instructional stimuli and differential reinforcement, and showed that stimulus control by the characters could be established solely by differential reinforcement during the test sessions. Experiment 3 showed that when a rule specified the relation between learning and test sessions, some subjects performed accurately in the test sessions without exposure to any differential consequences. This rule apparently altered the stimulus control properties of the characters much as did differential reinforcement during testing. However, compared to differential reinforcement, the rule established stimulus control more quickly.
Current practices in the undergraduate Psychology of Learning course were assessed through a survey in which a questionnaire probing the teaching of the course was sent to 238 four-year colleges and universities in the United States. Fifty-four percent of the questionnaires were returned. Learning courses were taught at all but 10 of the schools that responded. The course is typically one of several that can be selected to fulfill requirements for the major in psychology. Course orientation and content varied widely from cognitive to eclectic to behavioral, and laboratory requirements existed in fewer than half of the courses. The effects of these practices on behavior analysis are considered, and several suggestions are made for teaching behavior analysis in the Learning course and elsewhere to undergraduates.