Metacognitive reflections on one's current state of mind are largely absent during dreaming. Lucid dreaming, the exception to this rule, is a rare phenomenon; however, its occurrence can be facilitated through cognitive training. A central idea of such training strategies is to regularly question one's phenomenal experience: is the currently experienced world real, or just a dream? Here, we tested whether such lucid dreaming training can be enhanced with dream-like virtual reality (VR): over the course of four weeks, volunteers underwent lucid dreaming training in VR scenarios comprising dream-like elements, classical lucid dreaming training, or no training. We found that VR-assisted training led to significantly stronger increases in lucid dreaming than the no-training condition. Eye-signal-verified lucid dreams during polysomnography corroborated the behavioural results. We discuss the potential mechanisms underlying these findings, in particular the role of synthetic dream-like experiences, the incorporation of VR content into dream imagery serving as memory cues, and extended dissociative after-effects of VR sessions on subsequent experiences that might amplify lucid dreaming training during wakefulness.
This article is part of the theme issue ‘Offline perception: voluntary and spontaneous perceptual experiences without matching external stimulation’.
Music is part of cultural practice and, at the same time, is interwoven with biology through its effects on the brain and its likely evolutionary origin. Studies of music, however, are traditionally rooted in the humanities and are often carried out in a purely historical context, with little input from neuroscience and biology. Here, we argue that lullabies are a particularly well-suited test case for studying the biological versus cultural aspects of music.
There have been many studies on intelligent robotic systems for patients with motor impairments, in which different sensor types and human-machine interface (HMI) methods have been developed. However, these studies fail to achieve complex activity detection at a minimal sensing level. In this paper, exploratory approaches are adopted to investigate ocular activity dynamics and complex activity estimation using a single-channel EOG device. First, the stationarity of ocular activities during static motion is investigated, and some activities are found to be non-stationary. Further, no statistical difference is found between the envelope sequences in the temporal domain. However, when utilized as an alternative to low-pass filtering, high-frequency harmonic components in the frequency domain are found to substantially improve the contrast between ocular activities and the performance of the EOG-HMI-based activity detection system. Different classifiers are trained on the activities, and their prediction performance is evaluated with leave-one-session-out cross-validation. Accordingly, the two-dimensional CNN model achieved the highest performance, with an accuracy of 72.35%. Furthermore, clustering performance is assessed using unsupervised learning, and the results are evaluated in terms of how well the feature sets are grouped. The system is further tested in real time with a graphical user interface, and the subjects' scores and survey data are used to verify its effectiveness.
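The abstract contrasts three analysis steps on single-channel EOG windows: a stationarity check, a temporal-domain envelope, and high-frequency harmonic energy in the frequency domain. The sketch below illustrates one plausible reading of those steps; the sampling rate, window handling, band edges, and function names are assumptions for illustration, not values or code from the paper.

```python
# Hypothetical sketch of the per-window EOG analyses described above.
# FS and the 30-100 Hz "harmonic" band are assumptions, not paper values.
import numpy as np
from scipy.signal import hilbert, welch
from statsmodels.tsa.stattools import adfuller

FS = 250  # assumed EOG sampling rate (Hz)

def is_stationary(eog_window, alpha=0.05):
    """Augmented Dickey-Fuller test: a low p-value suggests stationarity."""
    _, p_value, *_ = adfuller(eog_window)
    return p_value < alpha

def envelope_features(eog_window):
    """Temporal-domain envelope of the EOG segment (Hilbert magnitude)."""
    return np.abs(hilbert(eog_window))

def harmonic_band_energy(eog_window, band=(30.0, 100.0)):
    """Energy in a high-frequency band, used instead of low-pass filtering."""
    freqs, psd = welch(eog_window, fs=FS, nperseg=min(256, len(eog_window)))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].sum()
```

On this reading, windows whose envelopes are statistically indistinguishable may still separate well on `harmonic_band_energy`, which is consistent with the reported gain from frequency-domain features.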
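The evaluation protocol named in the abstract, leave-one-session-out cross-validation, can be expressed compactly with scikit-learn's `LeaveOneGroupOut`. This is a minimal sketch assuming a feature matrix `X`, activity labels `y`, and a per-window session index; the paper's best model was a two-dimensional CNN, but a random-forest stand-in keeps the sketch self-contained and is not the authors' model.

```python
# Minimal leave-one-session-out CV sketch (stand-in classifier, not the
# paper's 2-D CNN). X: (n_windows, n_features); y: activity labels;
# sessions: session index per window. All names are illustrative.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def loso_accuracy(X, y, sessions):
    """Train on all sessions but one; test on the held-out session."""
    scores = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=sessions):
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X[train_idx], y[train_idx])
        scores.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))
    return float(np.mean(scores))
```

Holding out whole sessions, rather than random windows, prevents within-session correlations from inflating the accuracy estimate, which is presumably why the authors chose this scheme.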