Motor imagery (MI) is a frequently used “mental trigger” for non-invasive brain-computer interfaces (BCIs). Numerous studies have examined the effectiveness of MI-BCI for post-stroke rehabilitation, but the results remain inconclusive. One obstacle to the effectiveness of this method may be the tension between the internal focus of mental activity (i.e., modeling of reality) inherent in MI and the recognized importance of sensory feedback from the actual physical environment in BCI-facilitated therapy. The need to allocate attentional resources to both internal actions and their external consequences may contribute to the low accuracy of MI-BCI classifiers in most users. Moreover, the internal focus of attention in MI may partly explain the consistent failures in combining MI-BCI with eye-tracker-based interaction technologies, since an external focus of attention is crucial for gaze control. A potentially effective replacement for motor imagery in BCIs is attempted movements (AMs): movements that are attempted but remain unrealized because of paralysis or amputation. Studies have shown that BCIs decode AMs more successfully than MI (e.g., [1]). Despite this potential, AMs have received little attention, possibly because they are difficult to model with healthy participants and because MI-BCIs are so widely used. One approach to modeling AMs in healthy subjects is quasi-movements (QM): voluntary movements that the subject minimizes to such an extent that they eventually become undetectable by objective measures [2]. However, QM has been studied even less than AM since its discovery by V.V. Nikulin and colleagues [2], possibly because the difference between QM and MI has been insufficiently understood.
We recently demonstrated that the event-related desynchronization of the sensorimotor rhythm in QM does not depend on the residual electromyogram (EMG), indicating that, contrary to prior views, strict EMG control (which is often impossible) may not be necessary for QM. As a result, QM can be adopted more widely as an alternative to MI in BCIs [3]. Moreover, we substantiated and refined earlier findings [2] that QM is strikingly similar to actual movement [4]. Here, we present our initial findings on the asynchronous classification of QM, which may serve as a foundation for a real-time QM-BCI system. We used EEG data recorded from 23 participants who synchronized their QM and MI with rhythmic sound triplets. SimpleNet [5], a highly interpretable convolutional neural network, was trained on a subset of each participant's data separately for QM and MI against a referential non-motor task. The network was then applied offline, blind to the sound timing, to another subset of the data in 1.5-second windows with a 0.1-second step. QM/MI was detected when four consecutive windows were classified as positive, with a refractory period of three seconds after each detection. Because of the high variability of MI-BCI performance among untrained individuals, we assessed classifier performance only in participants with a true positive rate (TPR) above 0.5: 7 such participants in QM and 5 in MI. QM showed better intention detection than MI, although the difference was not significant according to the Mann-Whitney test. The mean ± standard deviation of the TPR was 0.81±0.12 in QM and 0.77±0.12 in MI; the false alarm rate (s⁻¹) was 0.03±0.02 in QM and 0.04±0.03 in MI; and the response time (s) was 2.81±0.06 in QM and 2.86±0.10 in MI. These initial findings on asynchronous BCI modeling are consistent with previous studies demonstrating superior classification of QM over MI in synchronous paradigms [2].
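The asynchronous detection rule described above (four consecutive positive 1.5-second windows at a 0.1-second step, followed by a three-second refractory period) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function name, the probability threshold of 0.5, and the exact refractory handling are assumptions.

```python
def detect_events(window_scores, step=0.1, k_consecutive=4,
                  refractory=3.0, threshold=0.5):
    """Asynchronous event detector: flag an intention when `k_consecutive`
    sliding-window classifier scores in a row exceed `threshold`, then stay
    silent for `refractory` seconds. One score is assumed per `step` seconds
    (0.1 s in the study); times are relative to the first window."""
    detections = []               # detection times in seconds
    run = 0                       # current run of positive windows
    silent_until = float("-inf")  # end of the current refractory period
    for i, score in enumerate(window_scores):
        t = i * step
        if t < silent_until:      # still in the refractory period
            run = 0
            continue
        run = run + 1 if score > threshold else 0
        if run >= k_consecutive:  # four consecutive positives: detection
            detections.append(t)
            silent_until = t + refractory
            run = 0
    return detections
```

The 0.5 threshold assumes the classifier outputs a probability-like score; the abstract does not specify the decision variable, so this value is a placeholder.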
Notably, only minor hyperparameter optimization was performed for SimpleNet, leaving ample room for improving the classification. Given the superior classification of AM over MI [1] and the promising preliminary findings presented here, the use of AM by end-users appears to be a viable option, and employing QM in studies that model AM could promote the further development of this technique. Additionally, the similarity of QM and AM to overt movement suggests that they could be used to convey intention via gaze-controlled interfaces, a role for which overt motor confirmation is well suited but MI-BCIs have proved inadequate.
Despite the prevalence of visuomotor transformations in our motor skills, their mechanisms remain incompletely understood, especially for imagined actions such as mentally picking up a cup or pressing a button. Here, we used a stimulus–response task to directly compare the visuomotor transformations underlying overt and imagined button presses. Electroencephalographic activity was recorded while participants responded to highlights of the target button while ignoring a second, non-target button. Movement-related potentials (MRPs) and event-related desynchronization occurred for both overt movements and motor imagery (MI), with responses present even for non-target stimuli. Consistent with the activity-accumulation model, in which visual stimuli are evaluated and transformed into the eventual motor response, the timing of MRPs matched the response time on individual trials. Activity-accumulation patterns were observed for MI as well. Yet, unlike overt movements, MI-related MRPs were not lateralized, which appears to be a neural marker of the distinction between generating a mental image and transforming it into an overt action. Top-down response strategies governing this hemispheric specificity should be accounted for in future research on MI, including basic studies and medical practice.
The neural mechanisms underlying motor preparation have attracted much attention, particularly because of the assertion that they are similar to the mechanisms of motor imagery (MI), a technique widely used in motor rehabilitation and brain-computer interfaces (BCIs). Here we clarified the process of visuomotor transformation for real and imagined movements by analyzing EEG responses time-locked to the appearance of visual targets and to movement onsets. The experimental task required responding to target stimuli with button presses or imagined button presses while ignoring distractors. We examined how different components of movement-related potentials (MRPs) varied with reaction time (RT) and interpreted the findings in terms of the motor noise accumulation hypothesis. Furthermore, we compared MRPs and event-related desynchronization (ERD) for overt motor actions versus motor imagery. Among the MRPs, we distinguished lateralized readiness potentials (LRPs) and reafferent potentials (RAPs). While MRPs were similar for real and imagined movements, imagery-related potentials were not lateralized. LRPs during real movements lasted longer for longer RTs, which is consistent with activity accumulation in the motor cortex prior to overt action onset. LRPs also occurred for non-target stimuli, but they were small and short-lived. We interpret these results in terms of a visuomotor transformation in which information flows from visual to motor areas and results in a movement, a decision not to move, and/or a mental image of a movement. The amplitude of the late positive peak that developed during MI was correlated with the amplitude of the β-ERD; since the latency of this component was consistent with the timing of the RAP, we suggest that it is a non-lateralized, RAP-like component reflecting sensorimotor activation during kinesthetic MI.
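The ERD measure used in both abstracts is conventionally quantified as the percentage change of band power in an activity window relative to a baseline window. A minimal sketch of that standard computation, assuming the epochs are already band-pass filtered to the band of interest (e.g., beta); the function name and window conventions are illustrative, not taken from the papers:

```python
import numpy as np

def erd_percent(epochs, fs, baseline, activity):
    """Classic band-power ERD/ERS measure: ERD% = (A - R) / R * 100, where
    R is the mean power in the baseline window and A is the mean power in
    the activity window. `epochs` is an (n_epochs, n_samples) array assumed
    to be already band-pass filtered to the band of interest; `baseline`
    and `activity` are (start, stop) windows in seconds; `fs` is the
    sampling rate in Hz."""
    power = epochs ** 2                       # instantaneous band power
    avg = power.mean(axis=0)                  # average over epochs
    b0, b1 = (int(t * fs) for t in baseline)
    a0, a1 = (int(t * fs) for t in activity)
    r = avg[b0:b1].mean()
    a = avg[a0:a1].mean()
    return (a - r) / r * 100.0                # negative values = ERD
```

A negative value indicates desynchronization (power decrease relative to baseline), as with the β-ERD discussed above; a positive value would indicate event-related synchronization.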