The perceptual upright is thought to be constructed by the central nervous system (CNS) as a vector sum, combining estimates of the upright provided by the visual system and the body's inertial sensors with prior knowledge that the upright is usually above the head. Recent findings further show that the weighting of the respective sensory signals is proportional to their reliability, consistent with a Bayesian interpretation of the vector sum (Forced Fusion, FF). However, violations of FF have also been reported, suggesting that the CNS may rely on a single sensory system (Cue Capture, CC), or may process sensory signals based on inferred signal causality (Causal Inference, CI). We developed a novel alternative-reality system to manipulate visual and physical tilt independently. We tasked participants (n = 36) with indicating the perceived upright for various (in-)congruent combinations of visual-inertial stimuli, and compared models based on their agreement with the data. The results favor the CI model over FF, although this effect became unambiguous only for large discrepancies (±60°). We conclude that the notion of a vector sum does not provide a comprehensive explanation of the perception of the upright, and that CI offers a better alternative.
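For reference, the two main candidate models can be written in the standard cue-combination notation (a generic formulation; the study's exact parameterization may differ). FF predicts a reliability-weighted average of the visual estimate $s_v$ and the body estimate $s_b$ (a prior can be folded in as an additional weighted term), while CI gates fusion by the inferred probability that both signals share a common cause $C$:

$$\hat{s}_{\mathrm{FF}} = w_v s_v + w_b s_b, \qquad w_v = \frac{1/\sigma_v^2}{1/\sigma_v^2 + 1/\sigma_b^2}, \quad w_b = 1 - w_v$$

$$\hat{s}_{\mathrm{CI}} = p(C{=}1 \mid s_v, s_b)\,\hat{s}_{\mathrm{FF}} + \bigl(1 - p(C{=}1 \mid s_v, s_b)\bigr)\,\hat{s}_{\mathrm{seg}}$$

where $\sigma_v$ and $\sigma_b$ are the single-cue noise standard deviations and $\hat{s}_{\mathrm{seg}}$ is the segregated (single-cue) estimate; CC corresponds to the limiting case in which one cue receives all the weight.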
The goal of this study is to determine how a self-avatar in virtual reality, experienced from different viewpoints on the body (at eye- or chest-height), might influence body part localization, as well as self-localization within the body. Previous literature shows that people do not locate themselves in only one location, but rather primarily in the face and the upper torso. We therefore aimed to determine whether manipulating the viewpoint to either the height of the eyes or the height of the chest would shift self-location estimates towards these commonly identified locations of self. In a virtual reality (VR) headset, participants were asked to point at several of their body parts (body part localization) as well as "directly at you" (self-localization) with a virtual pointer. Both pointing tasks were performed before and after a self-avatar adaptation phase in which participants explored a co-located, scaled, gender-matched, and animated self-avatar. We hypothesized that experiencing a self-avatar might reduce inaccuracies in body part localization, and that viewpoint would influence pointing responses for both body part and self-localization. Overall, participants pointed relatively accurately to some of their body parts (shoulders, chin, and eyes), but very inaccurately to others, with large undershooting for the hips, knees, and feet, and large overshooting for the top of the head. Self-localization was spread across the body (as well as above the head).
In modern workplaces with rapidly changing skill requirements, suitable training and learning environments play a key role in keeping companies competitive and effective and in ensuring job satisfaction. Virtual Reality (VR) has emerged as a transformative technology for providing immersive, interactive, and engaging learning experiences. Especially when erroneous behaviour is associated with severe consequences or high costs, VR offers the opportunity to explore actions and visualize their consequences safely and affordably. In addition, it provides an easy way to personalize educational content, learning speed, and/or format to the individual, ensuring a good fit with skills and needs. This is decisive, since both insufficient and excessive workload during training sessions result in demotivation and reduced performance. In the latter case, persistent professional exhaustion, pressure to succeed, and stress can lead to long-term psychological consequences for employees. Besides skill and ability, current physical conditions (e.g., illness or fatigue) and psychological states (e.g., motivation) also affect learning performance. To identify and monitor individual mental states, Brain-Computer Interfaces (BCIs) that measure neurophysiological activation patterns, e.g., with electroencephalography (EEG) or functional near-infrared spectroscopy (fNIRS), can be integrated into a VR learning environment. Recently, fNIRS, a mobile optical brain imaging technique, has become popular for real-world applications due to its good usability and portability. For reliable online decoding of mental states, informative neuronal patterns, suitable methods for pre-processing and artefact removal, and efficient machine learning algorithms for classification need to be explored. We therefore investigated different working memory states in a freely moving fNIRS experiment presented in VR and the possibility of decoding these states reliably.

Eleven volunteers (four female, right-handed, mean age 23.73 years, SD = 1.42, range 21–26 years) participated in the study. The experimental task was a colour-based visuo-spatial n-back paradigm adapted from Lühmann and colleagues (2019), with a low (1-back) and a high (3-back) working memory load condition and a 0-back condition as an active baseline. Brain activity was recorded using the mobile NIRx NIRSport2 system. To capture brain activation patterns associated with working memory load, the optode montage was designed to optimally cover the prefrontal cortex (PFC; in particular, its dorso- and ventrolateral parts), with some lateral restriction imposed by the VR head-mounted display (HMD). fNIRS signals were processed using the Python toolboxes mne and mne-nirs. For decoding working memory load, we extracted statistical features (peak, minimum, average, slope, peak-to-peak, and time-to-peak) from epochs of oxygenated (HbO) and deoxygenated (HbR) hemoglobin concentration per channel. A Linear Discriminant Analysis (LDA), a Support Vector Machine (SVM), and a gradient boosting classifier (XGBoost) were explored and compared to a dummy classifier (empirical chance level). We also investigated which cortical regions contributed to the decoding when single features were chosen, and which feature combination optimized performance.
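To make the decoding step concrete, the following is a minimal sketch of such a feature-extraction and classifier-comparison pipeline in Python with scikit-learn and xgboost. It assumes epoching, filtering, and artefact removal have already been performed with mne/mne-nirs; the variable names epochs and labels are hypothetical placeholders, and the study's exact features, hyperparameters, and validation scheme may differ.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from xgboost import XGBClassifier

def nirs_features(epochs):
    # Six summary statistics per channel, computed on HbO/HbR epochs.
    # Returns an (n_epochs, n_channels * 6) feature matrix.
    data = epochs.get_data()        # (n_epochs, n_channels, n_times)
    times = epochs.times
    n_ep, n_ch, n_t = data.shape
    peak = data.max(axis=-1)
    minimum = data.min(axis=-1)
    average = data.mean(axis=-1)
    # least-squares slope of each channel trace across the epoch
    slope = np.polyfit(times, data.reshape(-1, n_t).T, 1)[0].reshape(n_ep, n_ch)
    peak_to_peak = peak - minimum
    time_to_peak = times[data.argmax(axis=-1)]
    return np.concatenate(
        [peak, minimum, average, slope, peak_to_peak, time_to_peak], axis=1
    )

X = nirs_features(epochs)   # 'epochs': preprocessed mne.Epochs (hypothetical name)
y = labels                  # 'labels': integer condition codes per epoch,
                            # e.g. 0-/1-/3-back mapped to 0, 1, 2

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="linear")),
    "XGBoost": XGBClassifier(n_estimators=100, max_depth=3),
    "Dummy": DummyClassifier(strategy="stratified"),  # empirical chance level
}
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=cv)           # accuracy per fold
    print(f"{name}: {acc.mean():.2f} ± {acc.std():.2f}")

Single-feature analyses (e.g., keeping only the slope columns) or restriction to channels over a region of interest can reuse the same cross-validation loop on a reduced feature matrix.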
With this study, we aim to provide empirically supported recommendations that bring online decoding pipelines for real-world VR-based learning applications a step closer.