Unilateral spatial neglect is a disabling condition that frequently occurs after stroke. People with neglect suffer from various spatial deficits across several modalities, which in many cases impair everyday functioning. A successful treatment has yet to be found. Several techniques have been proposed in recent decades, but only a few have shown long-lasting effects and none has completely rehabilitated the condition. Diagnostic methods for neglect could be improved as well. The disorder is normally diagnosed with pen-and-paper methods, which generally do not assess patients in everyday tasks and do not capture some forms of the disorder. Recently, promising new methods based on virtual reality have emerged. Virtual reality technologies hold great promise for the development of effective assessment and treatment techniques for neglect because they provide rich, multimodal, and highly controllable environments. To stimulate advances in this domain, we present a review and analysis of the current work. We describe past and ongoing research on virtual reality applications for unilateral neglect and discuss existing problems and new directions for development.
The present study examined when and how the ability to cross-modally match audio-visual fluent speech develops in 4.5-, 6-, and 12-month-old German-learning infants. In Experiment 1, 4.5- and 6-month-old infants' ability to match native (German) and non-native (French) fluent speech across modalities was assessed by presenting auditory and visual speech information sequentially, that is, in the absence of temporal synchrony cues. The results showed that 4.5-month-old infants matched native as well as non-native audio and visual speech stimuli, whereas 6-month-olds perceived the audio-visual correspondence of native-language stimuli only. This suggests that intersensory matching narrows for fluent speech between 4.5 and 6 months of age. In Experiment 2, auditory and visual speech information was presented simultaneously, thereby providing temporal synchrony cues. Here, 6-month-olds matched native as well as non-native speech, indicating that temporal synchrony cues facilitate the intersensory perception of non-native fluent speech. Intriguingly, despite the temporal coherence of the audio and visual stimuli, 12-month-olds matched the non-native language only. The results are discussed with regard to multisensory perceptual narrowing during the first year of life.