To navigate and guide adaptive behaviour in a dynamic environment, animals must accurately estimate their own motion relative to the external world. This is a fundamentally multisensory process involving the integration of visual, vestibular and kinesthetic inputs. Ideal observer models, paired with careful neurophysiological investigation, have helped to reveal how visual and vestibular signals are combined to support perception of linear self-motion direction, or heading. Recent work has extended these findings by emphasizing the dimension of time, both with regard to stimulus dynamics and the trade-off between speed and accuracy. Both time and certainty (i.e. the degree of confidence in a multisensory decision) are essential to the ecological goals of the system: terminating a decision process is necessary for timely action, and predicting one's accuracy is critical for making multiple decisions in a sequence, as in navigation. Here, we summarize a leading model for multisensory decision-making, then show how the model can be extended to study confidence in heading discrimination. Lastly, we preview ongoing efforts to bridge self-motion perception and navigation per se, including closed-loop virtual reality and active self-motion. The design of unconstrained, ethologically inspired tasks, accompanied by large-scale neural recordings, holds promise for a deeper understanding of spatial perception and decision-making in the behaving animal.
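For concreteness, the ideal-observer cue-combination rule alluded to above can be sketched as follows; the notation is ours, not the article's. Assuming independent, Gaussian-distributed visual and vestibular heading estimates, the optimal combined estimate is a reliability-weighted average:

\[
\hat{s}_{\mathrm{comb}} = w_{\mathrm{vis}}\,\hat{s}_{\mathrm{vis}} + w_{\mathrm{vest}}\,\hat{s}_{\mathrm{vest}},
\qquad
w_i = \frac{1/\sigma_i^{2}}{1/\sigma_{\mathrm{vis}}^{2} + 1/\sigma_{\mathrm{vest}}^{2}},
\]

with combined variance \(\sigma_{\mathrm{comb}}^{2} = \sigma_{\mathrm{vis}}^{2}\sigma_{\mathrm{vest}}^{2}\,/\,(\sigma_{\mathrm{vis}}^{2} + \sigma_{\mathrm{vest}}^{2})\), which is never larger than either single-cue variance, the standard signature of optimal multisensory integration.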
This article is part of the theme issue ‘Decision and control processes in multisensory perception’.