2020
DOI: 10.1371/journal.pcbi.1007963

A simple model for learning in volatile environments

Abstract: Sound principles of statistical inference dictate that uncertainty shapes learning. In this work, we revisit the question of learning in volatile environments, in which both the first- and second-order statistics of observations dynamically evolve over time. We propose a new model, the volatile Kalman filter (VKF), which is based on a tractable state-space model of uncertainty and extends the Kalman filter algorithm to volatile environments. The proposed model is algorithmically simple and encompasses the Kalman filter…
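To make the model description concrete, below is a minimal sketch of a VKF-style update for continuous observations, written from the update equations as we read them in the paper. The parameter names (lambda_ for the volatility learning rate, v0 for the initial volatility, omega for the observation noise) and the toy example at the end are illustrative choices for this sketch, not the authors' reference implementation.

```python
import numpy as np

def vkf_filter(observations, lambda_=0.1, v0=0.1, omega=0.1, m0=0.0):
    """Sketch of a volatile-Kalman-filter-style update for continuous
    observations. m: posterior mean, w: posterior variance, v: volatility
    estimate (parameter names are illustrative)."""
    m, w, v = m0, omega, v0
    means, volatilities = [], []
    for o in observations:
        # Gain: the volatility estimate inflates the prior variance,
        # so higher inferred volatility means a larger effective learning rate.
        k = (w + v) / (w + v + omega)
        m_prev, w_prev = m, w
        m = m_prev + k * (o - m_prev)        # mean update
        w = (1.0 - k) * (w_prev + v)         # variance update
        w_cov = (1.0 - k) * w_prev           # lag-one autocovariance
        # Error-driven volatility update with learning rate lambda_.
        v = v + lambda_ * ((m - m_prev) ** 2 + w_prev + w - 2.0 * w_cov - v)
        means.append(m)
        volatilities.append(v)
    return np.array(means), np.array(volatilities)

# Toy usage: a latent mean that jumps halfway through, observed with noise.
rng = np.random.default_rng(0)
true_mean = np.concatenate([np.zeros(100), 3.0 * np.ones(100)])
obs = true_mean + rng.normal(scale=0.5, size=200)
m_hat, v_hat = vkf_filter(obs)  # v_hat should rise transiently after the jump
```

The key departure from a fixed-gain Kalman filter is that the running volatility estimate feeds back into the gain, so large, surprising prediction errors transiently raise the effective learning rate.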

Cited by 66 publications (121 citation statements)
References 48 publications
“…Therefore, a more complicated model, one that captures higher-level beliefs about contingency transitions or learning when to learn, seems most appropriate, and indeed, that type of model was able to simulate the key features of our data (Palminteri et al, 2017). Future work will compare and contrast different potential computational models, including, but not limited to, Bayesian Hidden State Markov Models (Hampton et al, 2006), as well as switching (Gershman et al, 2014) and volatile Kalman Filters (Piray and Daw, 2020).…”
Section: Results
confidence: 99%
“…But the very premise of these experiments violates the simplifying assumption of the Kalman filter: that volatility is fixed and known to the agent. To handle this situation, new models were developed (Mathys et al, 2011; Piray and Daw, 2020) that generalize the Kalman filter to incorporate learning the volatility as well, arising from Bayesian inference in a hierarchical generative model in which the true volatility is also changing. In this case, exact inference is no longer tractable, but approximate inference is possible and typically incorporates Eqs.…”
Section: Model
confidence: 99%
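For contrast with the generalizations discussed in this excerpt, a minimal sketch of the standard one-dimensional Kalman filter for a random-walk state is given below; here the process variance v (the volatility) is fixed and assumed known, which is exactly the simplifying assumption the excerpt says these experiments violate. Parameter names are illustrative.

```python
import numpy as np

def kalman_fixed_volatility(observations, v=0.1, omega=0.1, m0=0.0, w0=1.0):
    """Standard 1-D Kalman filter for a random-walk state with fixed,
    known process variance v and observation noise omega."""
    m, w = m0, w0
    means = []
    for o in observations:
        k = (w + v) / (w + v + omega)  # gain uses a fixed v; nothing adapts it
        m = m + k * (o - m)            # posterior mean
        w = (1.0 - k) * (w + v)        # posterior variance
        means.append(m)
    return np.array(means)
```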
“…We developed a probabilistic model for learning under these circumstances. The data generation process arises from a further hierarchical generalization of these models (especially the generative model used in our recent work (Piray and Daw, 2020)), in which the true value of unpredictability is unknown and changing, as are the true reward rate and volatility (Figure 1d). The goal of the learner is to estimate the true reward rate from observations, which necessitates inferring volatility and unpredictability as well.…”
Section: Model
confidence: 99%
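As a rough illustration of the kind of hierarchical generative process this excerpt describes, the sketch below simulates an environment in which both volatility and unpredictability drift over time and jointly shape what the learner observes. The specific dynamics (slow random walks on the log scale) are an assumption of this sketch, not the authors' exact generative model.

```python
import numpy as np

def simulate_hierarchical_environment(T=500, drift=0.05, seed=0):
    """Illustrative generative sketch: volatility and unpredictability each
    drift slowly (log-scale random walks, an assumption here), the latent
    reward rate diffuses with variance equal to the current volatility, and
    observations add noise with variance equal to the current unpredictability."""
    rng = np.random.default_rng(seed)
    log_vol, log_unp = np.log(0.1), np.log(0.1)
    x = 0.0
    xs, obs, vols, unps = [], [], [], []
    for _ in range(T):
        log_vol += rng.normal(scale=drift)    # volatility drifts slowly
        log_unp += rng.normal(scale=drift)    # unpredictability drifts slowly
        v, u = np.exp(log_vol), np.exp(log_unp)
        x += rng.normal(scale=np.sqrt(v))     # latent reward rate diffuses
        o = x + rng.normal(scale=np.sqrt(u))  # noisy observation
        xs.append(x); obs.append(o); vols.append(v); unps.append(u)
    return np.array(xs), np.array(obs), np.array(vols), np.array(unps)

# The learner only sees the observations and must recover the reward rate,
# which in turn requires tracking volatility and unpredictability.
states, observations, volatility, unpredictability = simulate_hierarchical_environment()
```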