Virtual Reality (VR) has emerged as a promising tool in many domains of therapy and rehabilitation, and it has recently attracted the attention of researchers and clinicians working with elderly people with mild cognitive impairment (MCI), Alzheimer's disease, and related disorders. Here we present a study testing the feasibility of using highly realistic, image-based rendered VR with patients with MCI and dementia. We designed an attentional task to train selective and sustained attention, and we tested a VR version and a paper version of this task in a single-session, within-subjects design. Participants with MCI and dementia reported high satisfaction and interest in the task, together with a strong feeling of security and low levels of discomfort, anxiety, and fatigue. In addition, participants preferred the VR condition over the paper condition, even though the task was more difficult. Interestingly, apathetic participants showed a stronger preference for the VR condition than non-apathetic participants. These findings suggest that VR-based training is a promising tool for improving adherence to cognitive training in elderly people with cognitive impairment.
Background: Virtual reality (VR) opens up a vast number of possibilities in many domains of therapy. The primary objective of the present study was to evaluate the acceptability, for elderly subjects, of a VR experience using the image-based rendering virtual environment (IBVE) approach; the secondary objective was to test the hypothesis that visual cues presented in VR can enhance the generation of autobiographical memories.
Methods: Eighteen healthy volunteers (mean age 68.2 years) presenting memory complaints, with a Mini-Mental State Examination score higher than 27 and no history of neuropsychiatric disease, were included. Participants were asked to perform an autobiographical fluency task in four conditions. The first condition was a baseline grey screen, the second was a photograph of a well-known location in the participant's home city (FamPhoto), and the last two conditions displayed VR, i.e., a familiar image-based virtual environment (FamIBVE) consisting of an image-based representation of a known landmark square in the center of the city of experimentation (Nice), and an unknown image-based virtual environment (UnknoIBVE) captured in a public housing neighborhood containing unrecognizable building fronts. After each of the four experimental conditions, participants filled in self-report questionnaires assessing task acceptability (levels of emotion, motivation, security, fatigue, and familiarity). CyberSickness and Presence questionnaires were also administered after the two VR conditions. Autobiographical memory was assessed using a verbal fluency task, and the quality of recollection was assessed using the "remember/know" procedure.
Results: All subjects completed the experiment. Sense of security and fatigue did not differ significantly between the conditions with and without VR. The FamPhoto condition yielded a higher emotion score than the other conditions (P<0.05). The CyberSickness questionnaire showed that participants did not experience sickness during the experiment across the VR conditions. VR stimulated autobiographical memory, as demonstrated by the increased total number of responses on the autobiographical fluency task and the increased number of conscious recollections of memories for familiar versus unknown scenes (P<0.01).
Conclusion: The study indicates that VR using the FamIBVE system is well tolerated by the elderly. VR can also stimulate recollection of autobiographical memories and convey the familiarity of a given scene, which is an essential requirement for the use of VR in reminiscence therapy.
The Praxis test is a gesture-based diagnostic test that is accepted as diagnostically indicative of cortical pathologies such as Alzheimer's disease. Despite being simple, the test is often skipped by clinicians. In this paper, we propose a novel framework to investigate the potential of static and dynamic upper-body gestures, based on the Praxis test, for automating the test procedure in computer-assisted cognitive assessment of older adults. To support gesture recognition as well as correctness assessment of the performances, we collected a novel, challenging RGB-D gesture video dataset recorded with Kinect v2, which contains 29 specific gestures suggested by clinicians and recorded from both experts and patients performing the gesture set. Moreover, we propose a framework to learn the dynamics of upper-body gestures, treating the videos as sequences of short-term gesture clips. Our approach first uses body-part detection to extract image patches surrounding the hands and then, by means of a fine-tuned convolutional neural network (CNN), learns deep hand features that are fed to a long short-term memory (LSTM) network to capture the temporal dependencies between video frames.
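As a rough illustration of the CNN-plus-LSTM pipeline described in this abstract, the minimal PyTorch sketch below feeds per-frame hand patches through an image backbone and a recurrent layer. The ResNet-18 backbone, the class name `HandGestureNet`, and all hyperparameters are assumptions standing in for the authors' fine-tuned model, not their actual implementation.

```python
# Hedged sketch: hand-patch CNN features linked to an LSTM for gesture
# classification. Backbone choice, names, and sizes are assumptions.
import torch
import torch.nn as nn
from torchvision import models


class HandGestureNet(nn.Module):
    def __init__(self, num_gestures: int = 29, hidden_size: int = 256):
        super().__init__()
        backbone = models.resnet18(weights=None)  # stand-in for the fine-tuned CNN
        backbone.fc = nn.Identity()               # keep the 512-d feature vector
        self.cnn = backbone
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden_size,
                            batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_gestures)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, time, 3, H, W) patches cropped around the detected hands
        b, t, c, h, w = clips.shape
        feats = self.cnn(clips.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.lstm(feats)            # temporal dependencies across frames
        return self.classifier(out[:, -1])   # gesture logits from the last step


logits = HandGestureNet()(torch.randn(2, 16, 3, 224, 224))  # shape (2, 29)
```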
Immersive spaces such as 4-sided displays with stereo viewing and high-quality tracking provide a very engaging and realistic virtual experience. However, walking is inherently limited by the restricted physical space, both because of the screens (limited translation) and because of the missing back screen (limited rotation). In this paper, we propose three novel locomotion techniques with three concurrent goals: keep the user safe from reaching the translational and rotational boundaries; increase the amount of real walking; and provide a more enjoyable and ecological interaction paradigm than traditional controller-based approaches. We notably introduce the "Virtual Companion", which uses a small bird to guide the user through virtual environments (VEs) larger than the physical space. We evaluate the three new techniques through a user study with travel-to-target and path-following tasks. The study provides insight into the relative strengths of each new technique for the three aforementioned goals. Specifically, if speed and accuracy are paramount, traditional controller interfaces augmented with our novel warning techniques may be more appropriate; if physical walking matters most, two of our paradigms (extended Magic Barrier Tape and Constrained Wand) should be preferred; and fun and ecological criteria favor the Virtual Companion.
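To make the boundary-warning idea concrete, here is a minimal, self-contained sketch (not from the paper) of how near-boundary translation and rotation warnings could be triggered from tracked head pose; the room size, margins, and function names are all assumptions introduced for illustration.

```python
# Hedged sketch: fire a cue when the tracked position nears the screens or the
# user turns toward the missing back screen. All constants are assumed values.
ROOM_HALF = 1.5          # assumed half-width of the walkable area (m)
WALL_MARGIN = 0.4        # assumed distance at which a translation warning fires (m)
BACK_SCREEN_YAW = 180.0  # heading of the missing back screen (deg)
YAW_MARGIN = 30.0        # assumed angular margin for a rotation warning (deg)


def translation_warning(x: float, z: float) -> bool:
    """True when the user is within WALL_MARGIN of any physical screen."""
    return (ROOM_HALF - abs(x)) < WALL_MARGIN or (ROOM_HALF - abs(z)) < WALL_MARGIN


def rotation_warning(yaw_deg: float) -> bool:
    """True when the user faces within YAW_MARGIN of the missing back screen."""
    diff = abs((yaw_deg - BACK_SCREEN_YAW + 180.0) % 360.0 - 180.0)
    return diff < YAW_MARGIN


print(translation_warning(1.2, 0.0))  # True: close to a side screen
print(rotation_warning(170.0))        # True: turned toward the open side
```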