Childhood and adolescence are critical stages of the human lifespan, in which fundamental neural reorganizational processes take place. A substantial body of literature has investigated the accompanying neurophysiological changes, focusing on the most dominant feature of the human EEG signal: the alpha oscillation. Recent developments in EEG signal processing show that conventional measures of alpha power are confounded by various factors and need to be decomposed into periodic and aperiodic components, which represent distinct underlying brain mechanisms. It is therefore unclear how each part of the signal changes during brain maturation. Using multivariate Bayesian generalized linear models, we examined aperiodic and periodic parameters of alpha activity in the largest openly available pediatric dataset (N=2529, age 5-22 years) and replicated these findings in a preregistered analysis of an independent validation sample (N=369, age 6-22 years). First, the well-documented age-related decrease in total alpha power was replicated. However, when controlling for the aperiodic signal component, our findings provided strong evidence for an age-related increase in aperiodic-adjusted alpha power. As reported in previous studies, relative alpha power also showed a maturational increase, yet it underestimated the underlying relationship between periodic alpha power and brain maturation. The aperiodic intercept and slope decreased with increasing age and were highly correlated with total alpha power. Consequently, earlier interpretations of age-related changes in total alpha power need to be reconsidered, as the elimination of active synapses is linked to decreases in the aperiodic intercept rather than in periodic alpha power. Instead, analyses of diffusion tensor imaging data indicate that the maturational increase in aperiodic-adjusted alpha power is related to increased thalamocortical connectivity. Functionally, our results suggest that increased thalamic control of cortical alpha power is linked to improved attentional performance during brain maturation.
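The abstract does not name the decomposition tool, but this type of periodic/aperiodic separation is commonly performed with the specparam (formerly fooof) model, which fits a 1/f-like aperiodic component plus Gaussian peaks to the power spectrum. Below is a minimal sketch on synthetic data, assuming the `fooof` Python package; the spectrum construction and all parameter values are illustrative, not the authors' settings.

```python
# Minimal sketch: separating the periodic (alpha peak) and aperiodic (offset,
# exponent) components of an EEG power spectrum with the fooof/specparam model.
# The toy spectrum below stands in for one computed from real EEG, e.g. via
# Welch's method; all values are illustrative.
import numpy as np
from fooof import FOOOF

rng = np.random.default_rng(0)
freqs = np.linspace(1, 40, 118)
# Toy spectrum: 1/f aperiodic component plus an alpha peak at 10 Hz.
aperiodic = 10 ** (1.0 - 1.5 * np.log10(freqs))          # offset 1.0, exponent 1.5
alpha_bump = 0.4 * np.exp(-((freqs - 10) ** 2) / (2 * 1.5 ** 2))
spectrum = aperiodic * 10 ** (alpha_bump + 0.02 * rng.normal(size=freqs.size))

fm = FOOOF(peak_width_limits=(1, 8), max_n_peaks=4)
fm.fit(freqs, spectrum, freq_range=(3, 40))

offset, exponent = fm.get_params('aperiodic_params')      # aperiodic intercept & slope
peaks = np.atleast_2d(fm.get_params('peak_params'))       # rows: center freq, power, bandwidth
alpha = [p for p in peaks if 8 <= p[0] <= 13]             # peaks in the alpha band
print(f"aperiodic offset={offset:.2f}, exponent={exponent:.2f}")
if alpha:
    cf, pw, bw = alpha[0]
    print(f"aperiodic-adjusted alpha: {cf:.1f} Hz, power={pw:.2f}, bw={bw:.1f}")
```

The key point the sketch illustrates is that "aperiodic-adjusted alpha power" is the height of the alpha peak above the fitted 1/f component, so it can increase with age even while total band power decreases.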
Abstract: Eye tracking is a powerful means of providing assistive technologies for people with movement disorders, paralysis, and amputations. We present a highly intuitive eye-tracking-controlled robot arm operating in 3-dimensional space based on the user's gaze target point, enabling tele-writing and drawing. Usability and intuitiveness were assessed in a tele-writing experiment with 8 subjects who learned to operate the system within minutes of first use. These subjects were naive to the system and the task and had to write three letters on a whiteboard with a whiteboard pen attached to the robot arm's end-point. They were instructed to imagine they were writing text with the pen and to look where the pen should go, writing the letters as quickly and as accurately as possible given a letter-size template. Subjects were able to perform the task with facility and accuracy, and movements of the arm did not interfere with subjects' ability to control their visual attention so as to enable smooth writing. Over five consecutive trials there was a significant decrease in the total time used and in the total number of commands sent to move the robot arm from the first to the second trial, but no further improvement thereafter, suggesting that within writing 6 letters subjects had mastered the ability to control the system. Our work demonstrates that eye tracking is a powerful means to control robot arms in closed-loop and real-time, outperforming other invasive and non-invasive approaches to Brain-Machine Interfaces in terms of calibration time (<2 minutes), training time (<10 minutes), and interface technology costs. We suggest that gaze-based decoding of action intention may well become one of the most efficient ways to interface with robotic actuators, i.e. Brain-Robot Interfaces, and become useful beyond paralysed and amputee users for the general teleoperation of robots and exoskeletons in human augmentation.
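The abstract gives no implementation detail, but the closed-loop principle it describes (the pen tip is continually servoed toward the user's current gaze target) can be illustrated schematically. This is a hypothetical sketch: `DummyArm`, `step_toward`, and the step/deadband values are placeholders, not the authors' interface.

```python
# Hypothetical sketch of closed-loop gaze-to-endpoint control: each control
# cycle moves the robot's pen tip a bounded step toward the user's 3D gaze
# target. The arm class and parameters are illustrative placeholders.
import numpy as np

STEP = 0.005      # max end-point displacement per control cycle (m)
DEADBAND = 0.002  # ignore gaze jitter within this radius of the pen tip (m)

class DummyArm:
    """Stand-in for the real arm: just stores an end-point position."""
    def __init__(self):
        self.pos = np.zeros(3)
    def end_point(self):
        return self.pos.copy()
    def move_end_point(self, p):
        self.pos = np.asarray(p, dtype=float)

def step_toward(robot, target):
    """One control cycle: bounded step of the pen tip toward the gaze target."""
    error = np.asarray(target, dtype=float) - robot.end_point()
    dist = np.linalg.norm(error)
    if dist > DEADBAND:
        robot.move_end_point(robot.end_point() + error * min(STEP / dist, 1.0))

arm = DummyArm()
gaze_target = np.array([0.10, 0.05, 0.30])  # simulated 3D fixation point (m)
for _ in range(100):                        # 100 control cycles
    step_toward(arm, gaze_target)
print(arm.end_point())                      # pen tip has converged on the target
```

The deadband keeps fixational eye-movement jitter from rippling into the pen trajectory, while the bounded step keeps the arm's speed safe regardless of how far the gaze jumps.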
Eye movements are the only directly observable behavioural signals that are highly correlated with actions at the task level; they precede body movements and thus reflect action intentions. Moreover, eye movements are preserved in many conditions leading to paralysis or amputation, including stroke, spinal cord injury, Parkinson's disease, multiple sclerosis, and muscular dystrophy. Despite this benefit, eye tracking is not widely used as a control interface for robotic systems in movement-impaired patients, owing to poor human-robot interfaces. We demonstrate here how combining 3D gaze tracking using our GT3D binocular eye tracker with a custom-designed 3D head tracking system and calibration method enables continuous 3D end-point control of a robotic arm support system. Users can move their own hand to any location in the workspace by simply looking at the target and winking once. This purely eye-tracking-based system leaves the end-user free head movement and yet achieves high spatial end-point accuracy, on the order of 6 cm RMSE in each dimension with a standard deviation of 4 cm. 3D calibration is achieved by moving the robot along a 3-dimensional space-filling Peano curve while the user tracks it with their eyes. This results in a fully automated calibration procedure that yields several thousand calibration points, versus the dozen or so points of standard approaches, resulting in beyond state-of-the-art 3D accuracy and precision.
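The calibration idea, regressing eye-tracker features onto the known 3D robot positions sampled densely along a space-filling path, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: a simple zig-zag raster path stands in for the Peano curve, and the simulated binocular features and the affine fit are assumptions made only so the sketch is self-contained.

```python
# Illustrative reconstruction of the calibration step: the robot visits many
# known 3D positions along a space-filling path while eye features are logged,
# and a regression then maps eye features to a 3D gaze point.
import numpy as np

rng = np.random.default_rng(1)

# Dense zig-zag raster path through the workspace (stand-in here for the
# 3D space-filling Peano curve the robot traces during calibration).
n = 12                                   # 12**3 = 1728 calibration points
ax = np.linspace(0.0, 1.0, n)
targets = np.array([((x if (j + k) % 2 == 0 else 1.0 - x), ax[j], ax[k])
                    for k in range(n) for j in range(n) for x in ax])

# Simulated binocular features (left/right pupil x,y): an unknown affine
# function of the true fixation point plus tracker noise (an assumption).
A = rng.normal(size=(3, 4))
b = rng.normal(size=4)
feats = targets @ A + b + 0.01 * rng.normal(size=(len(targets), 4))

# Calibration: least-squares map from eye features (with bias term) to 3D
# position, fitted on thousands of points instead of a dozen.
X = np.hstack([feats, np.ones((len(feats), 1))])
W, *_ = np.linalg.lstsq(X, targets, rcond=None)

rmse = np.sqrt(np.mean((X @ W - targets) ** 2, axis=0))
print("per-axis calibration RMSE:", rmse)   # small residual on this toy data
```

The density of the path is the point of the procedure: because the fit is over thousands of samples, tracker noise averages out and the mapping is constrained across the whole workspace rather than interpolated from a handful of fixation targets.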
The ability of robotic rehabilitation devices to support paralysed end-users is ultimately limited by the degree to which human-machine interaction is designed to be effective and efficient in translating user intention into robotic action. Specifically, we evaluate the novel possibility of using binocular eye-tracking technology to distinguish voluntary winks from involuntary blinks, establishing winks as a novel low-latency control signal to trigger robotic action. By wearing binocular eye-tracking glasses, users can directly observe their environment or the actuator and trigger movement actions without having to interact with a visual display unit or user interface. We compare our novel approach to two conventional approaches for controlling robotic devices, based on electromyography (EMG) and on speech-based human-computer interaction technology. We present an integrated software framework based on ROS that allows transparent integration of these multiple modalities with a robotic system. We use a soft-robotic SEM glove (Bioservo Technologies AB, Sweden) to evaluate how the three modalities support the performance and subjective experience of the end-user during assisted movement. All three modalities are evaluated in streaming, closed-loop control operation for grasping physical objects. We find that wink control shows the lowest mean error rate and the lowest standard deviation (0.23 ± 0.07, mean ± SEM), followed by speech control (0.35 ± 0.13) and EMG gesture control (using the Myo armband by Thalmic Labs), which has the highest mean and standard deviation (0.46 ± 0.16). We conclude that our newly developed eye-tracking-based approach to controlling assistive technologies is a well-suited alternative to conventional approaches, especially when combined with 3D eye-tracking-based robotic end-point control.
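The wink/blink discrimination exploits the fact that a binocular tracker reports each eye separately: a reflexive blink closes both eyes near-simultaneously and briefly, while a voluntary wink closes only one eye, typically for longer. A hypothetical sketch follows, assuming per-eye "openness" signals from the tracker; the thresholds and signal names are illustrative, not the authors' values.

```python
# Hypothetical wink-vs-blink classifier on per-eye openness signals from a
# binocular tracker. A blink closes both eyes at once; a deliberate wink
# closes one eye while the other stays open. Thresholds are assumptions.
import numpy as np

CLOSED = 0.3        # openness below this counts as "eye closed"
MIN_WINK_S = 0.15   # winks are held longer than reflexive blinks

def detect_wink(left, right, fs):
    """Return 'left', 'right', or None for one window of openness samples."""
    left, right = np.asarray(left), np.asarray(right)
    l_closed, r_closed = left < CLOSED, right < CLOSED
    if np.any(l_closed & r_closed):          # both eyes shut at once -> blink
        return None
    for name, closed in (("left", l_closed), ("right", r_closed)):
        if closed.sum() / fs >= MIN_WINK_S:  # one eye shut long enough -> wink
            return name
    return None

# Toy usage: 0.5 s window at 120 Hz with the left eye closed for 0.25 s.
fs = 120
t = np.arange(int(0.5 * fs)) / fs
left = np.where((t > 0.1) & (t < 0.35), 0.05, 1.0)
right = np.ones_like(t)
print(detect_wink(left, right, fs))   # -> 'left' (would trigger a grasp command)
```

Rejecting any window in which both eyes close together is what keeps spontaneous blinks, which occur many times per minute, from triggering unintended grasp commands.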