We showed that three executed reach-and-grasp actions common in everyday life can be discriminated from non-invasive EEG. The underlying neural correlates showed significant differences between all tested conditions. These findings will contribute to our effort to control a neuroprosthesis in a natural and intuitive way, which could ultimately benefit motor-impaired end users in their daily lives.
Our detection model operates continuously, which makes it directly applicable to rehabilitation scenarios. By using both temporal and spectral information, we attained higher detection rates than those obtained with the MRCP and ERD detection models, in both the intrasession and intersession conditions.
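The following is a minimal sketch of what such a continuous detector combining temporal (low-frequency, MRCP-like) and spectral (band-power, ERD-like) features could look like. All window sizes, frequency bands, and function names are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sliding-window movement detector combining temporal and
# spectral EEG features (assumed bands and parameters, not the authors' code).
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250  # assumed sampling rate (Hz)

def bandpass(x, lo, hi, fs=FS, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

def window_features(window):
    """window: (n_channels, n_samples) EEG segment."""
    # Temporal feature: downsampled low-frequency (0.3-3 Hz) amplitude.
    lf = bandpass(window, 0.3, 3.0)[:, ::25].ravel()
    # Spectral features: log band power in mu (8-13 Hz) and beta (13-30 Hz).
    mu = np.log(np.var(bandpass(window, 8, 13), axis=-1))
    beta = np.log(np.var(bandpass(window, 13, 30), axis=-1))
    return np.concatenate([lf, mu, beta])

def fit_detector(windows, labels):
    """Calibrate on labelled movement (1) vs. rest (0) windows."""
    F = np.vstack([window_features(w) for w in windows])
    clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
    return clf.fit(F, labels)

def detect_continuously(clf, eeg, win_s=1.0, step_s=0.1):
    """Slide a window over continuous EEG and yield detection probabilities."""
    win, step = int(win_s * FS), int(step_s * FS)
    for start in range(0, eeg.shape[1] - win, step):
        f = window_features(eeg[:, start:start + win]).reshape(1, -1)
        yield start / FS, clf.predict_proba(f)[0, 1]
```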
Using low-frequency time-domain electroencephalographic (EEG) signals we show, for the same type of upper limb movement, that goal-directed movements have different neural correlates than movements without a particular goal. In a reach-and-touch task, we explored the differences in the movement-related cortical potentials (MRCPs) between goal-directed and non-goal-directed movements. We evaluated whether the detection of movement intention was influenced by the goal-directedness of the movement. In a single-trial classification procedure we found that classification accuracies are higher when the movement is goal-directed. Furthermore, by using the classifier patterns and estimating the corresponding brain sources, we show the importance of motor areas and the additional involvement of the posterior parietal lobule in the discrimination between goal-directed and non-goal-directed movements. Finally, we discuss the potential contribution of our results on goal-directed movements to a more reliable brain-computer interface (BCI) control that facilitates recovery in end users with spinal cord injury or stroke.
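As a hedged illustration of the analysis pipeline described above, the sketch below classifies single trials from low-frequency time-domain (MRCP-like) features and then converts the classifier weights into activation patterns suitable for source estimation (the forward-model transformation of Haufe et al., 2014). The feature extraction and classifier choices are assumptions, not the authors' exact method.

```python
# Minimal sketch: single-trial classification of goal-directed vs.
# non-goal-directed trials, plus classifier-pattern extraction.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def mrcp_features(epochs, decim=25):
    """epochs: (n_trials, n_channels, n_samples) low-pass-filtered EEG.
    Downsampled time-domain amplitudes serve as MRCP features."""
    return epochs[:, :, ::decim].reshape(len(epochs), -1)

def classify_and_patterns(epochs, labels):
    X = mrcp_features(epochs)
    clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")

    # Single-trial accuracy estimated with cross-validation.
    acc = cross_val_score(clf, X, labels, cv=10).mean()

    # Refit on all data and map weights w to activation patterns
    # a = cov(X) @ w, which can be projected to brain sources.
    clf.fit(X, labels)
    w = clf.coef_.ravel()
    patterns = np.cov(X, rowvar=False) @ w
    return acc, patterns
```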
Movement decoders exploit the tuning of neural activity to various movement parameters with the ultimate goal of controlling end-effector action. Invasive approaches, typically relying on spiking activity, have demonstrated feasibility. Results of recent functional neuroimaging studies suggest that information about movement parameters is even accessible non-invasively in the form of low-frequency brain signals. However, their spatiotemporal tuning characteristics to single movement parameters are still unclear. Here, we extend the current understanding of low-frequency electroencephalography (EEG) tuning to position and velocity signals. We recorded EEG from 15 healthy participants while they performed visuomotor and oculomotor pursuit tracking tasks. Linear decoders, fitted to EEG signals in the frequency range of the tracking movements, predicted positions and velocities with moderate correlations (0.2–0.4; above chance level) in both tasks. Predictive activity in terms of decoder patterns was significant in superior parietal and parieto-occipital areas in both tasks. By contrasting the two tracking tasks, we found that predictive activity in contralateral primary sensorimotor and premotor areas exhibited significantly larger tuning to end-effector velocity when the visuomotor tracking task was performed.
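A simple way to picture the decoding step described above is a linear model fitted to band-limited, time-lagged EEG that predicts 2D positions or velocities and is scored by Pearson correlation against the true kinematics. The sketch below uses ridge regression and arbitrary lag choices as assumptions; it is not the study's implementation.

```python
# Hedged sketch of linear kinematic decoding from low-frequency EEG.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge

def lagged_design(eeg, lags):
    """eeg: (n_samples, n_channels). Stack time-lagged copies as features."""
    n, c = eeg.shape
    X = np.zeros((n, c * len(lags)))
    for i, lag in enumerate(lags):
        X[lag:, i * c:(i + 1) * c] = eeg[:n - lag]
    return X

def fit_and_score(eeg_train, kin_train, eeg_test, kin_test,
                  lags=(0, 5, 10, 15, 20)):
    """kin_*: (n_samples, 2) target positions or velocities (x, y)."""
    model = Ridge(alpha=1.0)
    model.fit(lagged_design(eeg_train, lags), kin_train)
    pred = model.predict(lagged_design(eeg_test, lags))
    # Correlation per kinematic dimension, as reported in the abstract.
    return [pearsonr(pred[:, d], kin_test[:, d])[0] for d in range(2)]
```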
Objective. Continuous decoding of voluntary movement is desirable for closed-loop, natural control of neuroprostheses. Recent studies have shown that hand trajectories can be reconstructed from low-frequency (LF) electroencephalographic (EEG) signals. So far this has only been performed offline. Here, we attempt for the first time continuous online control of a robotic arm with LF-EEG-based decoded movements. Approach. Ten healthy participants were asked to track a moving target by controlling a robotic arm. At the beginning of the experiment, the robot was fully controlled by the participant’s hand trajectories. After calibrating the decoding model, that control was gradually replaced by LF-EEG-based decoded trajectories, first with 33%, then 66%, and finally 100% EEG control. As in previous offline studies, we regressed the movement parameters (two-dimensional positions, velocities, and accelerations) from the EEG with partial least squares (PLS) regression. To integrate the information from the different movement parameters, we introduced a combined PLS and Kalman filtering approach (named PLSKF). Main results. We obtained moderate yet overall significant (α = 0.05) online correlations between hand kinematics and PLSKF-decoded trajectories of 0.32 on average. With respect to PLS regression alone, the PLSKF yielded a stable correlation increase of Δr = 0.049 on average, demonstrating the successful integration of the different models. Parieto-occipital activations were highlighted for the velocity and acceleration decoder patterns. The level of robot control was above chance in all conditions. Participants reported feeling enough control to be able to improve with training, even in the 100% EEG condition. Significance. Continuous LF-EEG-based movement decoding for the online control of a robotic arm was achieved for the first time. The potential bottlenecks arising when switching from offline to online decoding, and possible solutions, were described. The effect of the PLSKF and its extensibility to different experimental designs were discussed.
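To make the PLSKF idea concrete, the sketch below shows one plausible way to fuse PLS-decoded position, velocity, and acceleration with a Kalman filter that uses a constant-acceleration state model. The state-space matrices, noise levels, and update interval are illustrative assumptions and the example handles a single axis; it is a conceptual sketch, not the authors' PLSKF implementation.

```python
# Conceptual PLS + Kalman filter fusion for one kinematic axis.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

dt = 0.05  # assumed decoder update interval (s)

# State [pos, vel, acc] with constant-acceleration dynamics.
A = np.array([[1, dt, 0.5 * dt ** 2],
              [0, 1, dt],
              [0, 0, 1]])
H = np.eye(3)            # PLS observes pos, vel, and acc directly
Q = 1e-3 * np.eye(3)     # process noise (assumed)
R = 1e-1 * np.eye(3)     # measurement noise of the PLS outputs (assumed)

def decode_trajectory(pls: PLSRegression, eeg_features):
    """eeg_features: (n_samples, n_features); pls is already fitted so that
    pls.predict returns [pos, vel, acc] per sample for one axis."""
    x = np.zeros(3)      # state estimate
    P = np.eye(3)        # state covariance
    out = []
    for z in pls.predict(eeg_features):            # z: noisy PLS estimate
        # Predict step with the kinematic model.
        x = A @ x
        P = A @ P @ A.T + Q
        # Update step with the PLS measurement.
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (z - H @ x)
        P = (np.eye(3) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)
```

The fused trajectory inherits the smoothness of the kinematic model while still following the PLS estimates, which is one way to obtain the moderate but stable correlation gain over PLS alone reported in the abstract.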