Although reciprocal inhibitory vestibular interactions following visual stimulation have been understood as sensory-reweighting mechanisms that stabilize motion perception, this hypothesis has not been thoroughly investigated with temporally resolved measurements. Recently, virtual reality technology has been implemented in various medical domains. However, exposure to virtual reality environments can cause discomfort, including nausea and headache, due to visual-vestibular conflicts. We speculated that self-motion perception could be altered by accelerative visual motion stimulation in virtual reality because of the absence of corresponding vestibular signals (visual-vestibular sensory conflict), which could result in sickness. The current study investigated spatio-temporal profiles of motion perception using immersive virtual reality. We demonstrated alterations in neural dynamics under the sensory mismatch condition (accelerative visual motion stimulation) and in participants with high levels of sickness after driving simulation. Additionally, an event-related potential analysis revealed that the high-sickness group showed higher P3 amplitudes under sensory mismatch conditions, suggesting a substantial demand on cognitive resources for motion perception when sensory signals conflict.
This pattern of results suggests that a moderate amount of auditory training delivered via a mobile device, which is cost-effective and requires minimal supervision, is useful for improving the speech understanding of older adults with hearing loss. Geriatr Gerontol Int 2017; 17: 61-68.
Objectives: To train participants to localize sound using virtual reality (VR) technology, auditory stimuli that contain accurate spatial cues are essential. The generic head-related transfer function (HRTF) underlying programmed spatial audio in VR does not reflect individual variation in monaural spatial cues, which is critical for auditory spatial perception in patients with single-sided deafness (SSD). Because binaural difference cues are unavailable to them, impaired auditory spatial perception is a typical problem in the SSD population that warrants intervention. This study assessed the applicability of binaurally recorded auditory stimuli in VR-based sound localization training for SSD patients. Methods: Sixteen subjects with SSD and thirty-eight normal-hearing controls (NHs) underwent VR-based training for sound localization and were assessed three weeks after completing training. The VR program incorporated auditory stimuli prerecorded individually for each participant in the SSD group and with an anthropometric model for the NH group. Results: Sound localization performance improved significantly in both groups after training, with benefits retained for an additional three weeks. Subjective improvements in spatial hearing were confirmed in the SSD group. Conclusion: In both SSD and NH individuals, VR-based sound localization training using individually recorded binaural stimuli proved effective and beneficial. Furthermore, VR-based training does not require sophisticated instruments or setups. These results suggest that this technique represents a new therapeutic option for impaired sound localization.