In three experiments, human subjects were trained on a five-component multiple schedule with different fixed intervals of monetary reinforcement scheduled in the different components. Subjects uninstructed about the fixed-interval schedules manifested high and generally equivalent rates regardless of the particular component. By comparison, subjects given instructions about the schedules showed orderly progressions of rates and temporal patterning as a function of the interreinforcement intervals, particularly when feedback about reinforcement was delivered but also when reinforcement-feedback was withheld. Administration of the instructions-reinforcement combination to subjects who had already developed poorly differentiated behavior, however, did not make their behavior substantially better differentiated. When cost was imposed for responding, both instructed and uninstructed subjects showed low and differentiated rates regardless of their prior histories. It was concluded that instructions can have major influences on the establishment and maintenance of human operant behavior.
Michael (1975) reviewed efforts to classify reinforcing events in terms of whether stimuli are added (positive reinforcement) or removed (negative reinforcement). He concluded that distinctions in these terms are confusing and ambiguous. Of necessity, adding a stimulus requires its previous absence, and removing a stimulus its previous presence. Moreover, there is no good basis, either behavioral or physiological, for concluding that distinctly different processes are involved, and on these grounds he proposed that the distinction be abandoned. Despite the cogency of Michael's analysis, the distinction between positive and negative reinforcement is still being taught. In this paper, we reconsider the issue from the perspective of 30 years, but we could not find new evidence in contemporary research and theory that allows reliable classification of an event as a positive rather than a negative reinforcer. We conclude by reiterating Michael's admonitions about the conceptual confusion created by such a distinction.
Young men pulled a plunger on mixed and multiple schedules in which periods of variable-interval monetary reinforcement alternated irregularly with periods of extinction (Experiment 1), or in which reinforcement was contingent on different degrees of effort in the two alternating components (Experiment 2). In the baseline conditions, the pair of stimuli correlated with the schedule components could be obtained intermittently by pressing either of two observing keys. In the main conditions, pressing one of the keys continued to produce both discriminative stimuli as appropriate. Pressing the other key produced only the stimulus correlated with variable-interval reinforcement or reduced effort; presses on this key were ineffective during periods of extinction or increased effort. In both experiments, key presses producing both stimuli occurred at higher rates than key presses producing only one, demonstrating enhancement of observing behavior by a stimulus correlated with the less favorable of two contingencies. A control experiment showed that stimulus change alone was not an important factor in the maintenance of the behavior. These findings suggest that negative as well as positive stimuli may play a role in the conditioned reinforcement of human behavior.
Rats responded under progressive-ratio schedules for sweetened milk reinforcers; each session ended when responding ceased for 10 min. Experiment 1 varied the concentration of milk and the duration of postreinforcement timeouts. Postreinforcement pausing increased as a positively accelerated function of the size of the ratio, and the rate of increase was reduced as a function of concentration and by timeouts of 10 s or longer. Experiment 2 varied reinforcement magnitude within sessions (number of dipper operations per reinforcer) in conjunction with stimuli correlated with the upcoming magnitude. In the absence of discriminative stimuli, pausing was longer following a large reinforcer than following a small one. Pauses were reduced by a stimulus signaling a large upcoming reinforcer, particularly at the highest ratios, and the animals tended to quit responding when the past reinforcer was large and the stimulus signaled that the next one would be small. Results of both experiments revealed parallels between responding under progressive-ratio schedules and other schedules containing ratio contingencies. Relationships between pausing and magnitude suggest that ratio pausing is under the joint control of inhibitory properties of the past reinforcer and excitatory properties of stimuli correlated with the upcoming reinforcer, rather than under the exclusive control of either factor alone.
We investigated the possibility that human-like fixed-interval performances would appear in rats given a variable-ratio history (Wanchisen, Tatham, & Mooney, 1989). Nine rats were trained under single or compound variable-ratio schedules and then under a fixed-interval 30-s schedule. The histories produced high fixed-interval rates that declined slowly over 90 sessions; differences as a function of the particular history were absent. Nine control animals given only fixed-interval training responded at lower levels initially, but rates increased with training. Despite differences in absolute rates, rates within the intervals and postreinforcement pauses indicated equivalent development of the accelerated response patterns suggestive of sensitivity to fixed-interval contingencies. The finding that the histories elevated rates without retarding development of differentiated patterns suggests that the effective response unit was a burst of several lever presses and that the fixed-interval contingencies acted on these units in the same way as for single responses. Regardless of history, the rats did not manifest the persistent, undifferentiated responding reported for humans under comparable schedules. We concluded that the shortcomings of animal models of human fixed-interval performances cannot be easily remedied by including a variable-ratio conditioning history within the model.