High dream recallers (HR) show larger brain reactivity to auditory stimuli during wakefulness and sleep than low dream recallers (LR), as well as more intra-sleep wakefulness (ISW), but no other modification of sleep macrostructure. To further probe the possible causal link between brain responses, ISW, and dream recall, we investigated the sleep microstructure of HR and LR and tested whether the amplitude of auditory evoked potentials (AEPs) was predictive of arousing reactions during sleep. Participants (18 HR, 18 LR) were presented with sounds during a whole night of sleep in the lab while polysomnographic data were recorded. Sleep microstructure (arousals, rapid eye movements (REMs), muscle twitches (MTs), spindles, and K-complexes (KCs)) was assessed using validated visual, semi-automatic, and automatic methods. AEPs to arousing (awakenings or arousals) and non-arousing stimuli were then computed. No between-group difference in sleep microstructure was found. In N2 sleep, arousing auditory stimuli elicited a larger parieto-occipital positivity and an increased late frontal negativity compared with non-arousing stimuli. Compared with LR, HR showed more arousing stimuli and more long awakenings regardless of sleep stage, but did not show more numerous or longer arousals. These results suggest that the amplitude of the brain response to a stimulus during sleep determines whether a subsequent awakening occurs, and that awakening duration (not arousal) is the critical parameter for dream recall. Notably, our results led us to propose that the minimum awakening duration needed for successful encoding of dreams into long-term memory is approximately 2 min.
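The comparison above rests on averaging stimulus-locked EEG epochs for the two stimulus classes. The following is a minimal sketch of that standard AEP computation; the array names, sampling rate, and the 100-ms baseline window are illustrative assumptions, not the study's exact parameters.

```python
# Sketch of stimulus-locked averaging for AEPs (standard practice, not the
# authors' exact pipeline). Epochs are baseline-corrected, then averaged.
import numpy as np

FS = 500  # sampling rate in Hz (assumed)

def average_evoked_potential(eeg, onsets, fs=FS, pre=0.1, post=0.8):
    """Average EEG epochs time-locked to stimulus onsets (in seconds),
    after subtracting each epoch's pre-stimulus baseline."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = []
    for t in onsets:
        i = int(t * fs)
        if i - n_pre < 0 or i + n_post > len(eeg):
            continue  # skip stimuli too close to the recording edges
        ep = eeg[i - n_pre:i + n_post]
        epochs.append(ep - ep[:n_pre].mean())  # baseline correction
    return np.mean(epochs, axis=0)

# Hypothetical usage: compare the two stimulus classes channel by channel.
# aep_arousing = average_evoked_potential(eeg, arousing_onsets)
# aep_non_arousing = average_evoked_potential(eeg, non_arousing_onsets)
```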
A novel framework for the joint detection of sleep spindles and K-complexes, two hallmarks of sleep stage S2, is proposed. Sleep electroencephalography (EEG) signals are split into an oscillatory component (spindles) and a transient component (K-complexes). This decomposition is achieved by applying morphological component analysis (MCA) to a sparse representation of EEG segments obtained with the recently introduced discrete tunable Q-factor wavelet transform (TQWT). Tuning the Q-factor provides a convenient and elegant way to naturally separate the signal into oscillatory and transient components. The detection step itself relies on thresholding (i) the transient component to reveal K-complexes and (ii) the time-frequency representation of the oscillatory component to identify sleep spindles. Optimal thresholds are derived from ROC-like curves (sensitivity vs. false discovery rate, FDR) on training sets, and performance is assessed on held-out test sets. We evaluated the method on full-night sleep EEG data collected from 14 participants. Compared with visual scoring (Expert 1), the proposed method detected spindles with a sensitivity of 83.18% and an FDR of 39%, and K-complexes with a sensitivity of 81.57% and an FDR of 29.54%. Similar performance was obtained when using a second expert as benchmark. In addition, when the TQWT and MCA steps were excluded from the pipeline, detection sensitivity dropped to 70% for spindles and 76.97% for K-complexes, while the FDR rose to 43.62% and 49.09%, respectively. Finally, we also evaluated the method on a set of publicly available sleep EEG recordings. Overall, our results suggest that the TQWT-MCA method may be a valuable alternative to existing spindle and K-complex detection methods. Paths for improvement and further validation on large-scale, open-access benchmark data sets are discussed.
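To make the two-threshold detection step concrete, here is a minimal sketch in Python. It assumes the EEG has already been split into oscillatory and transient components; the TQWT + MCA decomposition itself is not sketched, since no standard Python implementation is assumed. The sampling rate, sigma band, and both threshold values are placeholders for quantities that the paper tunes on training sets via sensitivity-vs-FDR curves.

```python
# Sketch of the thresholding stage only (assumed inputs: the oscillatory and
# transient components produced upstream by TQWT + MCA).
import numpy as np
from scipy.signal import spectrogram

FS = 256  # sampling rate in Hz (assumed)

def detect_spindles(oscillatory, fs=FS, power_thresh=2.0):
    """Flag spindle candidates from sigma-band (11-16 Hz) power in the
    time-frequency representation of the oscillatory component.
    power_thresh is a placeholder for a training-set-tuned value."""
    f, t, Sxx = spectrogram(oscillatory, fs=fs, nperseg=fs, noverlap=fs // 2)
    sigma = (f >= 11) & (f <= 16)
    sigma_power = Sxx[sigma].mean(axis=0)
    norm = sigma_power / (np.median(sigma_power) + 1e-12)
    return t[norm > power_thresh]  # times (s) of supra-threshold windows

def detect_kcomplexes(transient, fs=FS, amp_thresh=75e-6):
    """Flag K-complex candidates as large negative deflections in the
    transient component; amp_thresh (volts) is likewise a placeholder."""
    idx = np.flatnonzero(transient < -amp_thresh)
    return idx / fs  # times (s) of supra-threshold samples
```

In the published pipeline the two components come from sparse MCA over TQWT coefficients (a high Q-factor capturing the oscillatory part, a low one the transient part), which is precisely what the ablation above shows to matter: removing that step costs sensitivity and inflates the FDR.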
We present Visbrain, an open-source Python package that offers a comprehensive visualization suite for neuroimaging and electrophysiological brain data. Visbrain consists of two levels of abstraction: (1) objects, which are highly configurable neuro-oriented visual primitives (3D brain, source connectivity, etc.), and (2) graphical user interfaces for higher-level interactions. The object level offers flexible, modular tools to produce and automate the production of figures, using an approach similar to Matplotlib's subplots. The second level visually connects these objects by controlling their properties and interactions through graphical interfaces. The current release of Visbrain (version 0.4.2) contains 14 different objects and three responsive graphical user interfaces built with PyQt: Signal, for the inspection of time series and spectral properties; Brain, for any visualization involving a 3D brain; and Sleep, for polysomnographic data visualization and sleep analysis. Each module has been developed in tight collaboration with end users, primarily neuroscientists and domain experts, who bring their experience to make Visbrain as transparent as possible to the recording modality (e.g., intracranial EEG, scalp EEG, MEG, anatomical and functional MRI). Visbrain is developed on top of VisPy, a Python package providing high-performance 2D and 3D visualization by leveraging the computational power of the graphics card. Visbrain is available on GitHub and comes with documentation, examples, and datasets (http://visbrain.org).
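A minimal sketch of the two abstraction levels is shown below. It assumes the import paths of the 0.4.x releases (visbrain.objects and visbrain.gui) and uses placeholder file names for the Sleep example; consult http://visbrain.org for the authoritative API.

```python
# Object level vs. GUI level in Visbrain (0.4.x import paths assumed).
from visbrain.objects import BrainObj, SceneObj  # object level
from visbrain.gui import Sleep                   # GUI level

# Object level: compose a figure from visual primitives, subplot-style.
sc = SceneObj(size=(800, 600))
b_obj = BrainObj('B1', translucent=True)  # 'B1' is a bundled template brain
sc.add_to_subplot(b_obj, row=0, col=0, title='Translucent template brain')
sc.preview()  # open the interactive canvas

# GUI level: launch the Sleep interface on a polysomnographic recording.
# 'my_psg.edf' and 'my_scoring.txt' are placeholder file names.
Sleep(data='my_psg.edf', hypno='my_scoring.txt').show()
```

The object level is what enables scripted, reproducible figure production; the GUI level wraps the same objects for interactive inspection, which is why the two can share configuration.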