When people walk side-by-side, they often synchronize their steps. To achieve this, individuals might cross-modally match audiovisual signals from the movements of the partner with kinesthetic, cutaneous, visual, and auditory signals from their own movements. Because signals from different sensory systems are processed with noise and asynchronously, the challenge for the CNS is to derive the best estimate from this conflicting information. This is currently thought to be done by a mechanism operating as a Maximum Likelihood Estimator (MLE). The present work investigated whether audiovisual signals from the partner are integrated according to MLE in order to synchronize steps during walking. Three experiments were conducted in which the sensory cues from a walking partner were virtually simulated. In Experiment 1, seven participants were instructed to synchronize with human-sized Point Light Walkers and/or footstep sounds. Results revealed the highest synchronization performance, quantified by the time to achieve synchronization and by synchronization variability, with auditory and audiovisual cues. However, this auditory dominance effect might have been due to artifacts of the setup. Therefore, in Experiment 2, human-sized virtual mannequins were implemented, and the audiovisual stimuli were rendered in real time and thus were synchronous and co-localized. All four participants synchronized best with audiovisual cues, and for three of them the results point toward optimal integration consistent with the MLE model. Experiment 3 yielded performance decrements for all three participants when the cues were incongruent. Overall, these findings suggest that individuals might optimally integrate audiovisual cues to synchronize steps during side-by-side walking.
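For context, the MLE account predicts that the combined audiovisual estimate weights each cue by its reliability (inverse variance), so the predicted variability of bimodal synchronization is never larger than that of the more reliable unimodal cue. A minimal sketch of this prediction, using purely illustrative unimodal variabilities (sigma_a and sigma_v are assumptions, not values from the experiments), might look like:

```python
# Minimal sketch of the MLE cue-combination prediction.
# sigma_a and sigma_v are hypothetical standard deviations of the unimodal
# (auditory, visual) synchronization errors in milliseconds; illustrative only.

def mle_prediction(sigma_a: float, sigma_v: float) -> tuple[float, float, float]:
    """Return the reliability weights and the predicted bimodal standard deviation."""
    w_a = sigma_v**2 / (sigma_a**2 + sigma_v**2)   # weight given to the auditory cue
    w_v = sigma_a**2 / (sigma_a**2 + sigma_v**2)   # weight given to the visual cue
    sigma_av = (sigma_a**2 * sigma_v**2 / (sigma_a**2 + sigma_v**2)) ** 0.5
    return w_a, w_v, sigma_av

# Example: an auditory cue twice as reliable as the visual cue.
print(mle_prediction(sigma_a=20.0, sigma_v=40.0))  # (0.8, 0.2, ~17.9)
```

Comparing such a predicted bimodal variability against the variability actually observed with audiovisual cues is the usual test of whether integration is optimal in the MLE sense.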
The control of human motor timing is captured by cognitive models that make assumptions about the underlying information-processing mechanisms. A paradigm for its inquiry is the Sensorimotor Synchronisation (SMS) task, in which an individual is required to synchronise the movements of an effector, such as the finger, with the periodically appearing onsets of an oscillating external event. The Linear Phase Correction model (LPC) is a cognitive model that captures the asynchrony dynamics between the finger taps and the event onsets. It assumes cognitive processes that are modelled as independent random variables (perceptual delays, motor delays, timer intervals). Methods exist that estimate the model parameters from the asynchronies recorded in SMS tasks. However, while many natural situations afford only very short synchronisation periods, these methods require long asynchrony sequences to allow for unbiased estimation (see Jacoby, Tishby, Repp, Ahissar & Keller, 2015b). Also, depending on the task, long records may be hard to obtain experimentally. Moreover, in typical SMS tasks, records are taken repeatedly to reduce biases, yet by averaging parameter estimates from multiple observations, the existing methods do not exploit all available information in the most appropriate way. Therefore, the present work introduces a new approach to parameter estimation that addresses these limitations.
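As background, the asynchrony dynamics assumed by the LPC model can be written as A_{n+1} = (1 - alpha) A_n + T_n - C + M_{n+1} - M_n, where A_n is the asynchrony at tap n, alpha the phase-correction gain, T_n the internal timer interval, M_n the motor delay, and C the event period. A minimal simulation sketch under assumed, purely illustrative parameter values (the alpha, timer-noise, and motor-noise values below are not estimates from any data) might be:

```python
import numpy as np

# Minimal simulation sketch of the Linear Phase Correction (LPC) model:
#   A[n+1] = (1 - alpha) * A[n] + T[n] - C + M[n+1] - M[n]
# All parameter values are illustrative assumptions.

def simulate_lpc(n_taps=500, alpha=0.5, C=500.0,
                 timer_sd=20.0, motor_sd=10.0, seed=0):
    rng = np.random.default_rng(seed)
    T = rng.normal(C, timer_sd, n_taps)        # internal timer intervals (ms)
    M = rng.normal(0.0, motor_sd, n_taps + 1)  # motor delays (ms)
    A = np.zeros(n_taps + 1)                   # asynchronies, tap minus onset (ms)
    for n in range(n_taps):
        A[n + 1] = (1 - alpha) * A[n] + T[n] - C + M[n + 1] - M[n]
    return A

asynchronies = simulate_lpc()
print(round(asynchronies.mean(), 1), round(asynchronies.std(), 1))
```

Parameter estimation then amounts to recovering alpha and the timer and motor variances from an observed asynchrony sequence such as this one; the shorter the sequence, the harder this inverse problem becomes, which is the limitation the present work addresses.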