Predictive coding theories propose that the brain constantly updates its internal models of the world to minimize prediction errors and optimize sensory processing. However, the neural mechanisms that link the encoding of prediction errors to the optimization of sensory representations remain unclear. Here, we provide direct evidence for how predictive learning shapes the representational geometry of the human brain. We recorded magnetoencephalography (MEG) while human participants listened to acoustic sequences with different levels of regularity. Representational similarity analysis revealed that, through learning, the brain aligned its representational geometry to the statistical structure of the sensory inputs by clustering the representations of temporally contiguous and predictable stimuli. Crucially, we found that in sensory areas the magnitude of this representational shift correlated with the encoding strength of prediction errors. Furthermore, using partial information decomposition, we found that prediction errors were processed by a synergistic network of high-level associative and sensory areas. Importantly, the strength of synergistic encoding of prediction errors predicted the magnitude of representational alignment during learning. Our findings provide evidence that large-scale neural interactions engaged in predictive processing modulate the representational content of sensory areas, which may enhance the efficiency of perceptual processing in response to the statistical regularities of the environment.