Exploration is vital for animals and artificial agents that face uncertainty about their environments due to initial ignorance or subsequent change. Their choices need to balance exploitation of the knowledge already acquired with exploration to resolve remaining uncertainty. However, the exact algorithmic structure of exploratory choices in the brain remains largely elusive. A venerable idea in reinforcement learning is that agents can plan appropriate exploratory choices offline, during the equivalent of quiet wakefulness or sleep. Although offline processing in humans and other animals, in the form of hippocampal replay and preplay, has recently been the subject of highly successful modelling, existing methods only apply to known environments. Thus, they cannot predict exploratory replay choices during learning or behaviour in dynamic environments. Here, we extend the theory of Mattar & Daw to examine the potential role of replay in approximately optimal exploration, deriving testable predictions for the patterns of exploratory replay choices in a paradigmatic spatial navigation task. Our modelling provides a normative interpretation of the available experimental data suggestive of exploratory replay. Furthermore, we highlight the importance of sequence replay and license a range of new experimental paradigms that should further our understanding of offline processing.