Most people envision themselves as operant agents endowed with the capacity to bring about changes in the outside world. This ability to monitor one's own causal power has long been suggested to rest upon a specific model of causal inference, i.e., a model of how our actions causally relate to their consequences. What this model is, and how it may explain departures from optimal inference such as illusory control and self-attribution biases, remains a matter of conjecture. To address this question, we designed a series of novel experiments requiring participants to continuously monitor their causal influence over the task environment by discriminating changes that were caused by their own actions from changes that were not. Comparing different models of choice, we found that participants' behaviour was best explained by a model that derives the consequences of the forgone action from the action actually taken and assumes a relative divergence between the two. Importantly, this model agrees with the intuitive way of construing causal power as "difference-making", in which causally efficacious actions are those that make a difference to the world. We suggest that our model outperformed all competitors because it closely mirrors people's belief in their causal power, a belief that is well-suited to learning action-outcome associations in controllable environments. We speculate that this belief may be part of the reason why reflecting upon one's own causal power fundamentally differs from reasoning about external causes.

The need to be and feel in control is so strong that individuals do whatever they can to re-establish control when it disappears or is taken away (Brehm, 1966; Brehm & Brehm, 1981). Re-establishment of lost agency can take different forms, from illusory pattern perception to the erroneous identification of causal relationships between random or unrelated stimuli. Thus, people experiencing a loss of control are more likely to see images in noise, to form illusory correlations, to perceive conspiracies, or to develop superstitions (Whitson & Galinsky, 2008). Such erroneous causal attributions would help restore a sense of control.

4 Here we draw upon a classical distinction between associative and generative approaches to causation, according to which causes are either "associated" with effects by retrospection or actively "generate" their effects through an operant mechanism (e.g., Cheng, 1997). Strictly speaking, however, associative models in the form of reinforcement-learning (RL) algorithms also possess a generative model of the world (i.e., an explanation for how observations are generated), whereas counterfactual emulation is a generative mechanism per se (i.e., a mechanism to decide which among several candidate causes has generated the effect). Here, and in what follows, we use the term "generative" in a broader and more liberal sense: generative models are those that draw on an explicit representation of the generative source (usually in the form of a probability distribution over action outcomes), which can be used to make ...
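To make the "difference-making" account concrete, the following is a minimal sketch in Python, not the authors' implementation: it assumes the model predicts an outcome distribution for the action actually taken, emulates a distribution for the forgone action from it, and scores causal influence by the divergence between the two. The names counterfactual_divergence and judge_agency, the use of a Kullback-Leibler measure, and the decision threshold are all illustrative assumptions; the paper's model may use a different divergence or decision rule.

import numpy as np

def counterfactual_divergence(p_outcome_taken, p_outcome_forgone):
    """Kullback-Leibler divergence between the predicted outcome
    distribution of the taken action and the emulated distribution
    of the forgone action. A large divergence means the choice of
    action "makes a difference" to the world."""
    p = np.asarray(p_outcome_taken, dtype=float)
    q = np.asarray(p_outcome_forgone, dtype=float)
    eps = 1e-12  # guard against log(0) for zero-probability outcomes
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def judge_agency(p_outcome_taken, p_outcome_forgone, threshold=0.5):
    """Toy decision rule (illustrative): attribute an observed change
    to oneself when the counterfactual divergence exceeds a threshold."""
    return counterfactual_divergence(p_outcome_taken, p_outcome_forgone) > threshold

# Example: the taken action strongly predicts one outcome, whereas the
# emulated forgone action would have predicted the opposite one.
p_taken   = [0.9, 0.1]   # P(outcome | action taken)
p_forgone = [0.1, 0.9]   # P(outcome | forgone action), emulated from the taken one
print(judge_agency(p_taken, p_forgone))  # True: the action makes a difference

In the terminology of the footnote above, a model of this kind is "generative" in the liberal sense adopted here: it manipulates explicit probability distributions over action outcomes rather than bare action-outcome association strengths.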