The brain's computations for active and passive self-motion estimation can be unified by a single model that optimally combines vestibular and visual signals with sensory predictions based on motor efference copies. It is unknown whether this theoretical framework also applies to the integration of artificial motor signals, such as those generated when driving a car. Here, we examined whether training humans to control a self-motion platform leads to the construction of an accurate internal model of the mapping between steering movements and the vestibular reafference. Participants (n = 15) were seated on a linear motion platform and actively controlled the platform's velocity with a steering wheel to translate their body to a memorized visual target location (Motion condition). We compared their steering behavior to that of participants (n = 15) who remained stationary and instead aligned a non-visible line with the target (Stationary condition). To probe learning, the gain between steering-wheel angle and platform (or line) velocity was abruptly changed twice during the experiment. These gain changes were virtually undetectable in the displacement error in the Motion condition, whereas they produced clear deviations in the Stationary condition. This difference shows that participants in the Motion condition made within-trial corrections to their steering immediately after each gain change. It suggests that they continuously compared the vestibular reafference to internal predictions, and thus employed and updated an internal forward model of the mapping between steering movements and the vestibular reafference.
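A minimal worked sketch may help fix ideas about this forward-model account; the notation below ($g$, $\hat{g}$, $\theta$, $v$, $e$, $\alpha$) and the delta-rule update are illustrative assumptions for exposition, not equations taken from the study.

% Illustrative sketch of a forward model of the steering-to-reafference
% mapping; symbols and the learning rule are assumptions, not the
% study's model.
\begin{align}
  v(t)       &= g\,\theta(t)                          && \text{platform velocity produced by wheel angle } \theta \\
  \hat{v}(t) &= \hat{g}\,\theta(t)                    && \text{predicted vestibular reafference (internal gain estimate } \hat{g}\text{)} \\
  e(t)       &= v(t) - \hat{v}(t)                     && \text{prediction error between reafference and prediction} \\
  \hat{g}    &\leftarrow \hat{g} + \alpha\, e(t)\,\theta(t) && \text{delta-rule re-calibration of the gain estimate}
\end{align}

On this reading, an abrupt change in the true gain $g$ produces a transient prediction error $e(t)$ that can both correct the ongoing movement and update $\hat{g}$, which would render the gain change nearly invisible in the displacement error, as observed in the Motion condition.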