The motor imagery (MI)-based brain-computer interface (BCI) is an intuitive interface that provides control over computer applications directly from brain activity. However, it has shown poor performance compared with other BCI systems such as P300 and SSVEP BCIs. This study therefore aimed to improve MI-BCI performance, with a focus on poorly performing users, by training participants in MI with the help of sensory input from tangible objects (i.e., hard and rough balls). The proposed method is a hybrid of training and imagery, combining motor execution and somatosensory sensation from a ball-type stimulus. Fourteen healthy participants completed the somatosensory-motor imagery (SMI) experiments (within-subject design), in which EEG data were classified into three classes (left hand, right hand, or right foot). In a scenario in which a remote robot was steered toward a target point, participants performed MI whenever the robot faced a three-way intersection. The SMI condition outperformed the MI condition, achieving a classification accuracy of 68.88% averaged over all participants, 6.59 percentage points higher than in the MI condition (p < 0.05). In poor performers, classification accuracy was 10.73 percentage points higher in the SMI condition than in the MI condition (62.18% vs. 51.45%), whereas good performers showed a slight decrement (0.86 percentage points) in the SMI condition (80.93% vs. 81.79%). By combining brain signals from the motor and somatosensory cortices, the proposed hybrid MI-BCI system improved classification performance, an effect that was predominant in poor performers (eight of nine subjects). Hybrid MI-BCI systems may significantly reduce the proportion of BCI-inefficient users and close the performance gap with other BCI systems.
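The abstract reports three-class accuracies but does not describe the decoding pipeline. A common baseline for multi-class MI decoding is log band-power (log-variance) features of band-pass filtered epochs fed to a linear discriminant analysis (LDA) classifier; the sketch below runs that baseline on synthetic data standing in for EEG epochs. All data, dimensions, and the choice of classifier here are illustrative assumptions, not the study's actual method.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for band-pass filtered EEG epochs:
# 90 trials x 16 channels x 500 samples, three classes
# (left hand / right hand / right foot), 30 trials each.
n_trials, n_channels, n_samples = 90, 16, 500
X = rng.standard_normal((n_trials, n_channels, n_samples))
y = np.repeat([0, 1, 2], n_trials // 3)

# Inject a weak class-dependent amplitude difference on a few channels
# so the toy problem is learnable (real MI data would instead show
# mu/beta event-related desynchronization over sensorimotor areas).
for c in range(3):
    X[y == c, c * 3:(c * 3) + 3, :] *= 1.3

# Log-variance per channel is a standard MI band-power feature.
features = np.log(X.var(axis=2))

clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, features, y, cv=5)
print(f"CV accuracy: {scores.mean():.2%}")
```

On real recordings, the epochs would typically first be band-pass filtered to the mu/beta range (roughly 8-30 Hz), and spatial filters such as CSP are commonly applied before the variance step.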
Assistive devices such as meal-assist robots aid individuals with disabilities and support the elderly in performing daily activities. However, existing meal-assist robots are inconvenient to operate because of non-intuitive user interfaces, requiring additional time and effort. We therefore developed a hybrid brain–computer interface-based meal-assist robot system built on three features that can be measured with scalp electrodes for electroencephalography (EEG). The following three procedures comprise a single meal cycle. (1) Triple eye-blinks (EBs) from the prefrontal channel were treated as the activation signal for initiating the cycle. (2) Steady-state visual evoked potentials (SSVEPs) from occipital channels were used to select food according to the user’s intention. (3) Electromyograms (EMGs) were recorded from temporal channels as the user chewed the food, marking the end of a cycle and indicating readiness for the next one. In experiments with five subjects, accuracy was 94.67% (EBs), 83.33% (SSVEPs), and 97.33% (EMGs); the false positive rate (FPR) was 0.11 times/min (EBs) and 0.08 times/min (EMGs); and the information transfer rate (ITR) was 20.41 bit/min (SSVEPs). These results demonstrate the feasibility of the assistive system. The proposed system allows users to eat on their own more naturally. Furthermore, it can increase the self-esteem of disabled and elderly people and enhance their quality of life.
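The ITR figure above is the standard Wolpaw information transfer rate: B = log2(N) + P·log2(P) + (1−P)·log2((1−P)/(N−1)) bits per selection, scaled to bits per minute by 60/T for a selection time of T seconds. The abstract does not state the number of SSVEP targets or the time per selection, so the parameters in the sketch below are assumptions chosen only for illustration.

```python
import math

def wolpaw_itr(n_targets: int, accuracy: float, seconds_per_selection: float) -> float:
    """Wolpaw information transfer rate in bits per minute."""
    if accuracy <= 1.0 / n_targets:
        return 0.0  # at or below chance level, no information transferred
    if accuracy == 1.0:
        bits = math.log2(n_targets)  # error term vanishes at 100% accuracy
    else:
        bits = (math.log2(n_targets)
                + accuracy * math.log2(accuracy)
                + (1 - accuracy) * math.log2((1 - accuracy) / (n_targets - 1)))
    return bits * 60.0 / seconds_per_selection

# Illustrative only: the abstract reports 83.33% SSVEP accuracy and
# 20.41 bit/min, but the target count (4) and selection time (3.2 s)
# below are assumptions, not figures from the paper.
print(f"{wolpaw_itr(4, 0.8333, 3.2):.2f} bit/min")  # ~20.4
```

For what it is worth, the reported 83.33% accuracy combined with four targets and about 3.2 s per selection yields an ITR close to the reported 20.41 bit/min; again, both parameter values are assumptions rather than details given in the abstract.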