2022
DOI: 10.1007/s40732-022-00521-1
Rate Dependence and Token Reinforcement? A Preliminary Analysis

Cited by 2 publications (3 citation statements)
References 37 publications
“…These tasks can be modeled with Rescorla–Wagner (RW)-RL models (Rescorla and Wagner, 1972), because the choices lead probabilistically, but immediately, to a primary reinforcer (Bartolo and Averbeck, 2020; Beron et al, 2022). However, one can also use symbolic reinforcers, for example, tokens or money, to drive learning (Jackson and Hackenberg, 1996; Kirsch et al, 2003; Seo and Lee, 2009; Delgado et al, 2011; Taswell et al, 2018, 2021, 2023; Falligant and Kranak, 2022; Yang et al, 2022). In these tasks, subjects learn to make choices to obtain tokens, which are predictive of primary reinforcers in the future.…”
Section: Introduction (mentioning)
confidence: 99%
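The Rescorla–Wagner rule referenced in the statement above updates an option's value estimate toward each obtained outcome by a fixed fraction of the prediction error. A minimal sketch of that update in a two-choice task is below; the learning rate and reward sequence are illustrative assumptions, not values from the cited papers.

```python
# Sketch of a Rescorla-Wagner (RW) value update for a two-option choice
# task. alpha (learning rate) and the reward sequence are hypothetical.

def rw_update(value, reward, alpha=0.1):
    """One RW step: move the value estimate toward the obtained reward
    by a fraction alpha of the prediction error (reward - value)."""
    return value + alpha * (reward - value)

# Two options with unknown payoffs; track a value estimate for each.
values = [0.0, 0.0]

# Outcomes (1 = reinforcer delivered, 0 = not) for repeated choices of
# option 0; option 0's estimate drifts toward its true reward rate.
for reward in [1, 0, 1, 1]:
    values[0] = rw_update(values[0], reward)
```

In token-reinforcement variants of these tasks, the same update can be driven by token delivery rather than by the primary reinforcer itself, since tokens predict primary reinforcers exchanged later.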
“…These tasks can be modeled with Rescorla–Wagner (RW) RL models (Rescorla and Wagner, 1972), because the choices lead probabilistically, but immediately, to a primary reinforcer (Bartolo and Averbeck, 2020; Beron et al, 2022). However, one can also use symbolic reinforcers, for example, tokens or money, to drive learning (Jackson, 1996; Kirsch et al, 2003; Seo and Lee, 2009; Delgado, Jou and Phelps, 2011; Taswell et al, 2018; Taswell et al, 2021; Falligant and Kranak, 2022; Yang, Li and Stuphorn, 2022; Taswell et al, 2023). In these tasks, subjects learn to make choices to obtain tokens, which can be exchanged in the future for primary reinforcers.…”
Section: Introduction (mentioning)
confidence: 99%
“…2009; Delgado, Jou and Phelps, 2011; Taswell et al, 2018; Taswell et al, 2021; Falligant and Kranak, 2022; Yang, Li and Stuphorn, 2022; Taswell et al, 2023). In these tasks, subjects learn to make choices to obtain tokens, which can be exchanged in the future for primary reinforcers.…” (bioRxiv preprint; this version posted October 11, 2023; doi: https://doi.org/10.1101/2023.10.11.561900)
Section: Introduction (mentioning)
confidence: 99%