Sound principles of statistical inference dictate that uncertainty shapes learning. In this work, we revisit the question of learning in volatile environments, in which both the first- and second-order statistics of the environment dynamically evolve over time. We propose a new model, the volatile Kalman filter (VKF), which is based on a tractable state-space model of uncertainty and extends the Kalman filter algorithm to volatile environments. Algorithmically, the proposed model is simpler and more transparent than existing models, and encompasses the Kalman filter as a special case. Specifically, in addition to the error-correcting rule of the Kalman filter for learning from observations, the VKF learns volatility according to a second error-correcting rule. These dual updates echo and contextualize classical psychological models of learning, in particular hybrid accounts combining Pearce-Hall and Rescorla-Wagner. At the computational level, compared with existing models, the VKF is more accurate, particularly in estimating volatility, as it is based on more faithful approximations to exact inference. Accordingly, when fit to empirical data, the VKF is better behaved than alternatives and better captures human choice data in a probabilistic learning task. The proposed model provides a transparent and coherent account of learning in stable or volatile environments and has implications for decision neuroscience research.
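The dual error-correcting structure described above can be sketched as follows. This is an illustrative Python sketch of a Kalman-style learner augmented with a second error-correcting volatility update, written from the abstract's description; the symbols (`sigma2` for observation noise, `lam` for the volatility learning rate) and the exact form of the volatility error term are our assumptions, not necessarily the paper's equations.

```python
def vkf_step(o, m, w, v, sigma2=1.0, lam=0.1):
    """One update of a Kalman-style learner with a second
    error-correcting volatility rule (illustrative sketch).

    o: observation; m, w: mean and variance of the state estimate;
    v: current volatility estimate; sigma2, lam: assumed parameters.
    """
    # Kalman-style gain: the volatility estimate v inflates the prior
    # variance, so higher estimated volatility yields faster learning.
    k = (w + v) / (w + v + sigma2)
    delta = o - m                      # prediction error on observations
    m_new = m + k * delta              # first error-correcting rule (mean)
    w_new = (1.0 - k) * (w + v)        # updated posterior variance
    w_cov = (1.0 - k) * w              # lag-1 covariance term
    # Second error-correcting rule: volatility tracks how much the
    # estimate moved beyond what current uncertainty already explains.
    v_err = (m_new - m) ** 2 + w_new + w - 2.0 * w_cov - v
    v_new = v + lam * v_err
    return m_new, w_new, v_new


# Usage: iterate over a sequence of observations, carrying (m, w, v).
m, w, v = 0.0, 1.0, 1.0
for o in [1.0, 1.0, -1.0]:
    m, w, v = vkf_step(o, m, w, v)
```

With volatility `v` held at zero and never updated (`lam = 0`, `v = 0`), the update reduces to a standard Kalman filter step, mirroring the claim that the VKF encompasses the Kalman filter as a special case.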