Electrocorticography (ECoG) has been demonstrated as a promising neural signal source for developing brain-machine interfaces (BMIs). However, concerns about the large craniotomy required to implant an ECoG grid limit the clinical translation of ECoG-based BMIs. In this study, we collected clinical ECoG signals from the sensorimotor cortex of three epileptic participants while they performed hand gestures. The ECoG power spectrum in hybrid frequency bands was extracted to build a synchronous real-time BMI system. High decoding accuracy for the three gestures was achieved in both offline analysis (85.7%, 84.5%, and 69.7%) and online tests (80% and 82%, tested on two participants only). We found that the decoding performance was maintained even with a subset of channels selected by a greedy algorithm. More importantly, these selected channels were mostly distributed along the central sulcus and clustered within an area of three interelectrode squares. Our finding of a reduced and clustered distribution of ECoG channels further supports the feasibility of clinically implementing an ECoG-based BMI system for the control of hand gestures.
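The channel-reduction result rests on a greedy forward selection of electrodes. The following is a minimal sketch of that idea, not the authors' pipeline: the feature layout (per-channel band-power features), the LDA classifier, and the cross-validation scheme are illustrative assumptions.

```python
# Hypothetical sketch of greedy forward channel selection for gesture decoding.
# X holds per-channel band-power features; y holds gesture labels. The classifier
# (LDA) and 5-fold cross-validation are assumptions for illustration only.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def greedy_channel_selection(X, y, n_select):
    """X: (n_trials, n_channels, n_features_per_channel); y: gesture labels."""
    n_channels = X.shape[1]
    selected, remaining = [], list(range(n_channels))
    for _ in range(n_select):
        best_ch, best_acc = None, -np.inf
        for ch in remaining:
            candidate = selected + [ch]
            feats = X[:, candidate, :].reshape(len(X), -1)  # flatten channel x band features
            acc = cross_val_score(LinearDiscriminantAnalysis(), feats, y, cv=5).mean()
            if acc > best_acc:
                best_ch, best_acc = ch, acc
        selected.append(best_ch)   # keep the channel that most improves decoding
        remaining.remove(best_ch)
    return selected
```

At each step the channel that yields the best cross-validated accuracy when added to the current subset is kept, which is why decoding performance can plateau well before all grid channels are used.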
Recent face recognition techniques have achieved remarkable success in fast face retrieval on huge image datasets, but performance is still limited when large illumination, pose, and facial expression variations are present. In contrast, the human brain has a powerful cognitive capability to recognize faces and remains robust across viewpoints and lighting conditions, even in the presence of partial occlusion. This paper proposes a closed-loop face retrieval system that combines a state-of-the-art face recognition method with the cognitive function of the human brain reflected in electroencephalography (EEG) signals. The system starts with a random face image and outputs a ranking of all images in the database according to their similarity to the target individual. At each iteration, a single-trial event-related potential (ERP) detector scores the user's interest in a rapid serial visual presentation (RSVP) paradigm, where the presented images are selected by the computer face recognition module. When the system converges, the ERP detector further refines the lower ranks to achieve better performance. In total, 10 subjects participated in an experiment exploring a database containing 1854 images of 46 celebrities. Our approach outperforms existing methods in average precision, indicating that human cognitive ability complements computer face recognition and contributes to better face retrieval.
Index Terms: brain-computer interface (BCI); face retrieval; closed-loop system; electroencephalography (EEG)
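The core of the closed loop is fusing the computer-vision similarity ranking with the brain-derived interest scores at each iteration. Below is a minimal sketch of one plausible fusion rule, assuming a weighted-sum combination; the function names, the `alpha` parameter, and the blending scheme are illustrative assumptions rather than the paper's exact method.

```python
# Illustrative sketch of re-ranking in a closed-loop EEG + face-recognition system.
# face_similarity comes from the face recognition module; erp_scores come from the
# single-trial ERP detector for images already presented in the RSVP stream.
import numpy as np

def update_ranking(face_similarity, erp_scores, presented_idx, alpha=0.5):
    """face_similarity: (n_images,) similarity to the current target estimate.
    erp_scores: dict mapping presented image index -> ERP interest score.
    Returns image indices sorted from most to least likely target."""
    fused = face_similarity.copy()
    for idx in presented_idx:
        # Assumed fusion: blend computer-vision similarity with brain-derived interest.
        fused[idx] = (1 - alpha) * face_similarity[idx] + alpha * erp_scores[idx]
    return np.argsort(-fused)
```

Images that elicit strong target-like ERPs are pushed up the ranking, and the top of the updated ranking determines which images the face recognition module proposes for the next RSVP block.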
The event-related potential (ERP) is a brain response measured in electroencephalography (EEG) that reflects human cognitive activity. ERPs have been introduced into brain-computer interfaces (BCIs) to communicate the subject's intention to a computer. Due to the low signal-to-noise ratio of EEG, most ERP studies rely on grand-averaging over many trials. Recently, single-trial ERP detection has attracted more attention because it enables real-time processing tasks such as rapid face identification, where each target to be retrieved may appear only once and no target label is available for averaging. Moreover, how features contribute temporally and spatially to single-trial ERP detection has not been fully investigated. In this paper, we implement a local-learning-based (LLB) feature extraction method to investigate the importance of spatial-temporal components of the ERP in a rapid face identification task using single-trial detection. Compared with previous methods, the LLB method preserves the nonlinear structure of the EEG signal distribution and analyzes the importance of the original spatial-temporal components via optimization in feature space. As a data-driven method, the weighting of the spatial-temporal components does not depend on the ERP detection method. The importance weights are optimized by making targets more distinguishable from non-targets in feature space, and a regularization penalty is introduced in the optimization to obtain sparse weights. This spatial-temporal feature extraction method was evaluated on EEG data from 15 participants performing a face identification task using a rapid serial visual presentation paradigm. Compared with other methods, the proposed spatial-temporal analysis uses sparser features (only 10% of the total) and achieves single-trial ERP detection performance (98%) comparable to that of the full feature set across different detection methods. An interesting finding is that the N250 is the earliest temporal component contributing to single-trial ERP detection in face identification, and the importance of the N250 component is more laterally distributed toward the left hemisphere. We show that using only the left N250 component outperforms the right N250 in the face identification task using single-trial ERP detection. This finding is also important for building a fast and efficient (fewer electrodes) BCI system for rapid face identification.
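The weighting idea can be illustrated with a margin-based local-learning scheme: each spatial-temporal component receives a weight that enlarges the distance between a sample and its nearest neighbour of the other class ("miss") relative to its nearest neighbour of the same class ("hit"), with an L1 penalty encouraging sparse weights. The sketch below follows that generic formulation; the exact objective, distance metric, and hyperparameters here are assumptions, not the authors' implementation.

```python
# Illustrative sketch of local-learning-based (LLB) feature weighting with an
# L1 sparsity penalty. X holds spatial-temporal ERP features, y binary labels
# (target vs. non-target). Hyperparameters lam, lr, n_iter are placeholders.
import numpy as np

def llb_feature_weights(X, y, lam=0.01, lr=0.1, n_iter=100):
    """X: (n_samples, n_features); y: (n_samples,) binary labels."""
    n, d = X.shape
    w = np.ones(d)
    for _ in range(n_iter):
        grad = np.zeros(d)
        for i in range(n):
            same = (y == y[i]) & (np.arange(n) != i)
            diff = y != y[i]
            dist = np.abs(X - X[i]) @ w                     # weighted L1 distances
            hit = X[np.where(same)[0][np.argmin(dist[same])]]
            miss = X[np.where(diff)[0][np.argmin(dist[diff])]]
            # Gradient of (hit distance - miss distance) per feature: shrinking it
            # pushes same-class neighbours closer and other-class neighbours apart.
            grad += np.abs(X[i] - hit) - np.abs(X[i] - miss)
        w -= lr * (grad / n + lam * np.sign(w))             # gradient step + L1 penalty
        w = np.maximum(w, 0)                                # keep weights non-negative
    return w / (w.sum() + 1e-12)
```

Components whose weights survive the L1 penalty are the spatial-temporal features (such as the left-lateralized N250 window reported above) that most separate targets from non-targets, independently of which ERP detector is applied afterwards.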