Learning in a changing and uncertain environment is a difficult problem. A popular solution is to predict future observations and then use surprising outcomes to update those predictions. However, humans also have a sense of confidence that characterizes the precision of their predictions. Bayesian models use this confidence to regulate learning: for a given surprise, the update is smaller when confidence is higher. We explored the human brain dynamics subtending such a confidence weighting using magneto-encephalography. During our volatile probability learning task, subjects' confidence reports conformed to Bayesian inference. Several stimulus-evoked brain responses reflected surprise, and some of them were indeed further modulated by confidence. Confidence about predictions also modulated pupil-linked arousal and beta-range (15-30 Hz) oscillations, which in turn modulated specific stimulus-evoked surprise responses. Our results thus suggest that confidence about predictions modulates intrinsic properties of the brain state to amplify or dampen surprise responses evoked by discrepant observations.

Meyniel et al., 2015b), the weight of evidence (Rohe et al., 2019), the precision of predictions (Iglesias et al., 2013; Mathys et al., 2014; Vossel et al., 2014), discussed at the end of this article. Here, we propose to use optimal Bayesian models as a benchmark to formalize, at a computational level, the learning process. In particular, we formalize the notion of discrepancy between