The literature relevant to the differential reinforcement of low rates of responding (DRL) is reviewed with respect to measurement of the behavior, bursts of responding, sequential dependencies, extinction and reconditioning, comparative aspects, punishment, reinforcement of two interresponse times, amount of deprivation and reinforcement, behavioral contrast, stimulus generalization, and response generalization. This review suggests that (a) bursts of responding could be due to a lack of stimulus feedback, (b) similar interresponse times tend to follow each other, (c) the development of mediating behavior is correlated with responding which is more appropriate to the schedule contingencies, and (d) subjects "preferred" short interresponse times. The shape of the stimulus generalization gradients after training on a DRL schedule is either peaked, flat, or inverted depending on the schedule value and prior training. Studies loosely concerned with response generalization suggest that responding under this schedule may be qualitatively different from responding under a variable-interval schedule. Experimental approaches for investigating the possible inhibitory and/or aversive properties of differential reinforcement are indicated. Copyright 1970 by the American Psychological Association, Inc.
Pigeons' preferences between two schedules of reinforcement were determined by a choice method. In each schedule-pair compared, one had a higher overall rate of reinforcement, but the other had a shorter delay before the first reinforcement. Delay before the first reinforcement was a strong determinant of schedule preference, but a short delay could be offset by a large difference in rate of reinforcement.

Problem

Despite considerable investigation of the relation between performance and the scheduling of positive reinforcements, relatively few studies have investigated the "value" to S of particular schedules. In this context, value will be indexed by preferential choice between two stimuli, each associated with a reinforcement schedule. Given any set of schedules, how may we predict preference among them? The answer seems clear for fixed-criterion schedules varying within the same criterial dimension; e.g., shorter fixed ratios, intervals, or DRLs are undoubtedly preferred to longer fixed ratios, intervals, or DRLs, respectively. Within limits of discriminability, the schedule with the higher time-rate of reinforcement will be preferred. This principle would probably also predict accurately for choice between schedules specified by different criteria (e.g., fixed ratio versus fixed interval). The question posed here is whether this "rate of reinforcement" principle is sufficient to generate the preference ordering Ss exhibit when the schedules compared are themselves variable (e.g., two VI's, two VR's, a VI versus a VR). On VI's, for example, the variance (or range) as well as the mean of the intervals between reinforcements probably influences the choice. This is indeed true in the limit, since when the time-rates of reinforcement are equated, a VI schedule is preferred over an FI schedule (Herrnstein, 1964b).
Thus, in addition to the overall rate of reinforcement, the time from the choice response to the first reinforcement may be an important determinant of preference. An experiment by Autor (1960), carried out with an arrangement comparable to that described below, investigated pairs of VI schedules generated from an arithmetic progression. His pigeons preferred that member of each pair having the higher overall rate of reinforcement. However, among such VI schedules, overall rate of reinforcement and mean time to the first reinforcement covary. By using a fixed sequence of intervals, the present study isolates the time to the first reinforcement from the average rate of reinforcement.
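The distinction drawn above, between a schedule's overall rate of reinforcement and the distribution (mean, variance, range) of its inter-reinforcement intervals, can be made concrete with a minimal sketch. The schedules below are hypothetical illustrations, not values from the studies cited: an FI-like sequence and a variable sequence share the same overall rate and the same mean interval, yet differ in range, which is exactly the dimension the "rate of reinforcement" principle by itself cannot distinguish:

```python
# Illustrative sketch only; the interval sequences are hypothetical,
# not taken from Autor (1960) or Herrnstein (1964b).

def overall_rate(intervals):
    """Reinforcements per second over one full cycle of the sequence."""
    return len(intervals) / sum(intervals)

def mean_interval(intervals):
    """Mean inter-reinforcement interval (seconds)."""
    return sum(intervals) / len(intervals)

def interval_range(intervals):
    """Range of the intervals, one crude index of their variability."""
    return max(intervals) - min(intervals)

# FI-like schedule: every interval is 30 s.
fi = [30, 30, 30]
# Variable schedule with the same overall rate (1 per 30 s on average),
# but including an occasional very short delay to reinforcement.
vi = [5, 25, 60]

print(overall_rate(fi), overall_rate(vi))      # equal time-rates
print(mean_interval(fi), mean_interval(vi))    # equal means
print(interval_range(fi), interval_range(vi))  # 0 vs. 55: only variability differs
```

Since both sequences yield identical time-rates and means, any preference between them (such as the preference for VI over FI that Herrnstein reported at equated rates) must turn on the distribution of intervals, e.g., the chance of a short delay to the first reinforcement after the choice.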
A psychophysical choice technique can be used to measure discrimination of the stimuli produced by two fixed-ratio schedules. As the difference between the two ratios is reduced, the number of errors in discrimination increases. The analysis differentiates between discrimination and response bias, which are frequently confused in animal psychophysics.
Pigeons were trained in delayed matching-to-sample with two postsample stimuli. A postsample R-cue signaled that a matching choice phase would follow. A postsample F-cue signaled that a matching choice phase would not follow. Previous research found reduced matching accuracy on F-cued probe trials when comparison stimuli were presented in the choice phase. The present four experiments systematically varied the events following an F-cue to determine the conditions under which the F-cue reduces delayed-matching accuracy. When F-cues and R-cues controlled different behavior, matching on probe trials was poor. When both cues controlled the same behavior, matching on probe trials was good. This result is best explained by the theory that comparison stimuli retrieve the sample representation, but only in the behavioral context established by the R-cue. The present research supports the view that response-produced stimuli serve a contextual role in animal short-term memory.