1973
DOI: 10.1016/0022-2496(73)90023-0
A mathematical model of learning under schedules of interresponse time reinforcement

Cited by 4 publications (2 citation statements)
References 27 publications
“…For the sake of parsimony, we will describe the random interval (RI) and random differential reinforcement of low rates (RDRL). Our implementations of simple schedules are mainly based on initial work by Millenson (1963) and Ambler (1973). We consider their implementation ideal, because they are continuous versions of the discrete (and more widely used) algorithms (Fleshler & Hoffman, 1962).…”
Section: Methods
confidence: 99%
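The continuous random-interval (RI) implementation the statement attributes to Millenson (1963) and Ambler (1973) can be illustrated with a minimal sketch, assuming exponentially distributed inter-reinforcement setups (the continuous analogue of the discrete Fleshler & Hoffman lists); the function name and parameters below are illustrative, not taken from the cited papers:

```python
import random

def random_interval_schedule(mean_interval, session_length, response_times):
    """Sketch of a continuous random-interval (RI) schedule.

    Reinforcement 'sets up' after an exponentially distributed wait;
    the first response at or after setup is reinforced, and a new
    wait is drawn from the same distribution. Illustrative only --
    assumes exponential waits, which the cited papers motivate but
    do not necessarily parameterize this way.
    """
    reinforced = []
    setup = random.expovariate(1.0 / mean_interval)
    for t in sorted(response_times):
        if t > session_length:
            break
        if t >= setup:
            reinforced.append(t)
            # Next setup is timed from the reinforced response.
            setup = t + random.expovariate(1.0 / mean_interval)
    return reinforced
```

Because the exponential distribution is memoryless, reinforcement probability per unit time stays constant regardless of when the last reinforcer was delivered, which is what makes the continuous version "ideal" relative to sampling from a finite discrete list.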
“…In the well‐known DRL schedule (differential reinforcement of low rates of behavior), a minimum interresponse time (IRT) must precede rewarded responses (Ferster & Skinner, 1957). Using Beak, we were able to implement the random differential reinforcement of low rates—the RDRL schedule (Ambler, 1973; Logan, 1967). In an RDRL schedule, the required IRT varies randomly.…”
Section: Methods
confidence: 99%
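The RDRL contingency described in the statement can be sketched as follows: for each response, draw a fresh required IRT at random and reinforce only if the observed IRT meets it. This is a minimal sketch assuming an exponential distribution for the required IRT; Ambler (1973) specifies the actual distribution, and the function name here is illustrative:

```python
import random

def rdrl_reinforce(irts, mean_required, rng=random):
    """Sketch of a random DRL (RDRL) contingency.

    For each observed inter-response time (IRT), a new required IRT
    is drawn from an exponential distribution with the given mean;
    the response is reinforced when the observed IRT meets or
    exceeds the draw. Assumes an exponential requirement for
    illustration only.
    """
    outcomes = []
    for irt in irts:
        required = rng.expovariate(1.0 / mean_required)
        outcomes.append(irt >= required)
    return outcomes
```

Unlike fixed DRL, where any IRT above a single threshold is always reinforced, the random requirement makes reinforcement probability an increasing but graded function of IRT, so pausing longer raises (without guaranteeing) the chance of reward.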