Lab-based electroencephalography (EEG) techniques have matured over decades of research and can produce high-quality scientific data. It is often assumed that the specific choice of EEG system has limited impact on the data and does not add variance to the results. However, many low-cost and mobile EEG systems are now available, and there is some doubt as to how EEG data vary across these newer systems. We sought to determine how variance across systems compares to variance across subjects or repeated sessions. We tested four EEG systems: two standard research-grade systems, one system designed for mobile use with dry electrodes, and an affordable mobile system with a lower channel count. We recorded four subjects three times with each of the four EEG systems, which allowed us to assess the influence of all three factors on the variance of the data. Subjects performed a battery of six short standard EEG paradigms based on event-related potentials (ERPs) and steady-state visually evoked potentials (SSVEPs). Results showed that subjects accounted for 32% of the variance, systems for 9%, and repeated sessions for each subject-system combination for 1%. In most lab-based EEG research, the number of subjects per study typically ranges from 10 to 20, and the uncertainty in estimates of a mean (such as an ERP) shrinks with the square root of the number of subjects. As a result, with a pool of 16 subjects, the variance due to the EEG system (9%) is of the same order of magnitude as the variance due to subjects (32%/sqrt(16) = 8%). The two standard research-grade EEG systems did not differ significantly in their means in any paradigm. However, each of the two other EEG systems differed in mean values from one or both of the standard research-grade systems in at least half of the paradigms. In addition to providing specific estimates of the variability across EEG systems, subjects, and repeated sessions, we also propose a benchmark based on ERP responses for evaluating new mobile EEG systems.
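The back-of-the-envelope comparison above can be recomputed as follows; this is a minimal sketch of the argument only, with the 16-subject pool and the variable names assumed for illustration rather than taken from the study's analysis code.

```python
import math

# Variance shares reported in the abstract (illustrative recomputation).
subject_share = 0.32   # variance explained by subjects
system_share = 0.09    # variance explained by EEG system
n_subjects = 16        # typical lab-study size assumed above

# Averaging over N subjects shrinks the subject-related uncertainty of a
# group mean (e.g. a grand-average ERP) by roughly sqrt(N); the system-related
# component does not average out when all subjects use the same system.
effective_subject_share = subject_share / math.sqrt(n_subjects)
print(f"subject-related uncertainty after averaging: {effective_subject_share:.0%}")  # ~8%
print(f"system-related uncertainty: {system_share:.0%}")                              # 9%
```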
In the NIPS 2017 Learning to Run challenge, participants were tasked with building a controller for a musculoskeletal model to make it run as fast as possible through an obstacle course. Top participants were invited to describe their algorithms. In this work, we present eight solutions that used deep reinforcement learning approaches based on algorithms such as Deep Deterministic Policy Gradient, Proximal Policy Optimization, and Trust Region Policy Optimization. Many of the solutions used similar relaxations and heuristics, such as reward shaping, frame skipping, discretization of the action space, symmetry, and policy blending. However, each of the eight teams implemented different modifications of the known algorithms.
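As an illustration of one of these heuristics, the sketch below shows a generic frame-skipping wrapper for a Gym-style environment. It is not taken from any team's submission; the environment interface (reset/step) and the skip factor are assumptions for the example.

```python
class FrameSkip:
    """Repeat each chosen action for `skip` simulator steps and accumulate
    the reward, reducing the effective control frequency (one of the common
    heuristics mentioned above)."""

    def __init__(self, env, skip=4):
        self.env = env
        self.skip = skip

    def reset(self):
        return self.env.reset()

    def step(self, action):
        total_reward, obs, done, info = 0.0, None, False, {}
        for _ in range(self.skip):
            obs, reward, done, info = self.env.step(action)
            total_reward += reward
            if done:
                break
        return obs, total_reward, done, info
```

The agent then interacts with `FrameSkip(env)` instead of `env`, so each policy decision is held for several physics steps.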
Theories of embodied cognition postulate that the world can serve as an external memory. This implies that, instead of storing visual information in working memory, the information may equally be retrieved by appropriate eye movements. Given this assumption, the question arises of how we balance the effort of memorization against the effort of visually sampling our environment. We analyzed eye-tracking data from a sensorimotor task in which participants had to produce a copy of a LEGO® block model displayed on a computer screen. In the unconstrained condition, the model appeared immediately upon eye fixation on the model. In the constrained condition, we introduced a 0.7 s delay before uncovering the model. The model disappeared as soon as participants made a saccade outside of the Model Area. To successfully copy a model of eight blocks, participants made saccades to the Model Area on average 7.9 times in the unconstrained condition and 5.2 times in the constrained condition. However, the mean duration of a trial was 2.9 s (14%) longer in the constrained condition, even when taking into account the delayed visibility of the model. Thus, introducing a price for a certain type of saccade shifted subjects' behavior toward memorization, but this shift was maladaptive rather than adaptive, as the increased reliance on memorization led to longer overall performance times.
Sensorimotor processing is a critical function of the human brain, with multiple cortical areas specialised for sensory recognition or motor execution. Although there has been considerable research into sensorimotor control in humans, the steps between sensory recognition and motor execution are not fully understood. To provide insight into the brain areas responsible for sensorimotor computation, we used complex categorization-response tasks (variations of a Stroop task requiring recognition, decision-making, and motor responses) to test the hypothesis that some functional modules participate in both sensory and motor processing. We operationalized functional modules as independent components (ICs) yielded by an independent component analysis (ICA) of EEG data and measured event-related responses by means of inter-trial coherence (ITC). We consistently found ICs with event-related ITC responses related to both sensory stimulation and motor response onsets (on average 5.8 ICs per session). These findings reveal EEG correlates of tightly coupled sensorimotor processing in the human brain and support frameworks such as embodied cognition, common coding, and sensorimotor contingency, which do not sequentially separate sensory and motor brain processes.
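For readers unfamiliar with ITC, the sketch below computes it for synthetic single-trial signals using the standard definition (the magnitude of the across-trial mean of unit phase vectors). The synthetic data, sampling rate, and 10 Hz component are assumptions for illustration and do not reproduce the study's ICA-based pipeline; in practice the signal would be band-pass filtered around the frequency of interest before phase extraction.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
fs, n_trials, n_samples = 250, 100, 500      # sampling rate (Hz), trials, samples per trial
t = np.arange(n_samples) / fs

# Synthetic trials: a 10 Hz component phase-locked to "event onset" (t = 0)
# plus noise, mimicking an event-related IC activation.
trials = np.sin(2 * np.pi * 10 * t) + rng.normal(scale=1.0, size=(n_trials, n_samples))

# Instantaneous phase of each trial via the analytic signal.
phases = np.angle(hilbert(trials, axis=1))

# Inter-trial coherence: length of the mean unit phase vector across trials.
# ITC is 1 when phases are identical across trials and near 0 when they are random.
itc = np.abs(np.mean(np.exp(1j * phases), axis=0))

print(f"mean ITC across the epoch: {itc.mean():.2f}")
```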