We usually look at an object when we are going to manipulate it. Thus, eye tracking can be used to communicate intended actions. An effective human-machine interface, however, should be able to differentiate intentional from spontaneous eye movements. We report an electroencephalogram (EEG) marker that differentiates gaze fixations used for control from spontaneous fixations involved in visual exploration. Eight healthy participants played a game with their eye movements only. Their gaze-synchronized EEG data (fixation-related potentials, FRPs) were collected during the game's control-on and control-off conditions. A slow negative wave with a maximum in the parietooccipital region was present in each participant's averaged FRPs in the control-on condition and was absent, or had much lower amplitude, in the control-off condition. This wave was similar, but not identical, to the stimulus-preceding negativity, a slow negative wave that can be observed during feedback expectation. Classification of intentional vs. spontaneous fixations was based on amplitude features from 13 EEG channels, using 300 ms segments free from electrooculogram contamination (200–500 ms relative to fixation onset). For the first fixations in the fixation triplets required to make moves in the game, classified against control-off data, a committee of greedy classifiers provided 0.90 ± 0.07 specificity and 0.38 ± 0.14 sensitivity. Similar (slightly lower) results were obtained with a shrinkage Linear Discriminant Analysis (LDA) classifier. The second and third fixations in the triplets were classified at a lower rate. We expect that, with improved feature sets and classifiers, a hybrid dwell-based Eye-Brain-Computer Interface (EBCI) can be built using the FRP difference between intentional and spontaneous fixations. If this direction of BCI development proves successful, such a multimodal interface may improve the fluency of interaction and could become the basis for a new input device for paralyzed and healthy users alike, the EBCI “Wish Mouse.”
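To make the classification setup above concrete, here is a minimal Python sketch of the shrinkage-LDA baseline mentioned in the abstract. The channel count (13), the 200–500 ms window, and the sensitivity/specificity metrics come from the abstract; the sampling rate, the windowed-mean amplitude features, and the synthetic data are illustrative assumptions, not the authors' exact pipeline.

```python
# Hedged sketch: shrinkage-LDA classification of fixation-locked EEG epochs.
# Synthetic data stands in for real FRP recordings.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
fs = 500                                 # assumed sampling rate, Hz
n_channels, n_epochs = 13, 400
t0, t1 = int(0.2 * fs), int(0.5 * fs)    # 200-500 ms after fixation onset

# Fake fixation-locked epochs: (epochs, channels, samples over 0-600 ms)
X_raw = rng.normal(size=(n_epochs, n_channels, int(0.6 * fs)))
y = rng.integers(0, 2, n_epochs)         # 1 = intentional, 0 = spontaneous
# Inject a slow negative shift into "intentional" epochs as a stand-in
# for the parietooccipital negativity reported in the abstract
X_raw[y == 1, :, t0:t1] -= 0.5

def amplitude_features(epochs):
    # Amplitude features: mean voltage in consecutive 50 ms bins of the window
    win = epochs[:, :, t0:t1]
    n_bin = int(0.05 * fs)
    bins = win.reshape(win.shape[0], win.shape[1], -1, n_bin)
    return bins.mean(axis=-1).reshape(win.shape[0], -1)

X = amplitude_features(X_raw)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
lda.fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, lda.predict(X_te)).ravel()
print(f"specificity={tn / (tn + fp):.2f}  sensitivity={tp / (tp + fn):.2f}")
```

Shrinkage regularization of the covariance estimate is a standard choice for event-related EEG classification, where the number of features is large relative to the number of training epochs.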
We propose a novel way of controlling a robotic device with communicative eye movements that could help to solve the problem of false activations during gaze control, known as the Midas touch problem. The proposed approach can be considered explicitly based on communication between a human operator and a robot. Specifically, we employed gaze patterns characteristic of the "joint attention" type of communication between two persons. "Joint attention" gaze patterns are automatized and able to convey information about object location even under high cognitive load; we therefore assumed that they may make gaze-based robot control more stable. In a study with 28 healthy participants who were naive to this approach, most easily acquired robot control with "joint attention" gaze patterns. The study did not reveal a higher preference for the communicative type of control, possibly because the participants did not practice before the tests. We discuss potential benefits of the new approach that can be tested in future studies.
Improving eye-brain-computer interface performance by using electroencephalogram frequency components
Eye-brain-computer interfaces (EBCIs) could combine the advantages of eye-tracking systems used for operating technical devices with those of brain-computer interfaces. Such systems are intended both for patients with various motor impairments and for healthy individuals. The effectiveness of EBCIs largely depends on their ability to detect the user's intent to give a command in the electroencephalogram (EEG) recorded during a gaze fixation, that is, within just hundreds of milliseconds. These strict requirements necessitate making full use of the information contained in the EEG for more accurate classification of gaze fixations as "control" or spontaneous. This work describes our attempt to use for classification not only amplitude-based statistical features but also wavelet features characterizing oscillatory EEG components within the interval of 50–500 ms from gaze fixation onset. The integral index of classification accuracy, AUC, significantly depended on the feature set, reaching its highest value (0.75, averaged over the group of 8 participants) for the combined amplitude and wavelet set. We believe that further improvement of this method will facilitate the practical application of EBCIs.
Keywords: brain-computer interface, eye-brain-computer interface, electroencephalogram, EEG, gaze-based control, control gaze fixation, eye tracking, video-oculography, classification, wavelets
Funding: the work was performed with partial support from the Russian Scien…
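The abstract above reports that adding wavelet features to the amplitude features raised the AUC. Below is a hedged Python sketch of such a combined feature set. The 50–500 ms window, 13 channels, and AUC metric come from the abstracts; the Morlet wavelet, the frequency set, the 50 ms amplitude bins, and the synthetic data are illustrative assumptions, since the paper's exact wavelet family and binning are not specified here.

```python
# Hedged sketch: combining amplitude and Morlet-wavelet power features,
# evaluated with cross-validated AUC on synthetic fixation-locked epochs.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
fs = 500                                  # assumed sampling rate, Hz
n_channels, n_epochs = 13, 400
t0, t1 = int(0.05 * fs), int(0.5 * fs)    # 50-500 ms after fixation onset

X_raw = rng.normal(size=(n_epochs, n_channels, int(0.6 * fs)))
y = rng.integers(0, 2, n_epochs)          # 1 = "control", 0 = spontaneous
X_raw[y == 1, :, t0:t1] -= 0.5            # toy class difference

def amplitude_features(epochs):
    # Mean voltage in consecutive 50 ms bins of the window
    win = epochs[:, :, t0:t1]
    n_bin = int(0.05 * fs)
    n = (win.shape[-1] // n_bin) * n_bin
    bins = win[..., :n].reshape(win.shape[0], win.shape[1], -1, n_bin)
    return bins.mean(-1).reshape(win.shape[0], -1)

def wavelet_features(epochs, freqs=(4, 8, 13, 20), n_cycles=5):
    # Mean complex-Morlet power per channel and frequency in the window
    feats = []
    for f in freqs:
        t = np.arange(-0.5, 0.5, 1 / fs)
        sigma = n_cycles / (2 * np.pi * f)
        w = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma**2))
        conv = np.apply_along_axis(
            lambda x: np.convolve(x, w, "same"), -1, epochs)
        feats.append((np.abs(conv[:, :, t0:t1]) ** 2).mean(-1))
    return np.concatenate(feats, axis=1)

X = np.hstack([amplitude_features(X_raw), wavelet_features(X_raw)])
lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
scores = cross_val_predict(lda, X, y, cv=5, method="decision_function")
print(f"AUC (combined amplitude + wavelet features) = {roc_auc_score(y, scores):.2f}")
```

Concatenating the two feature families and letting the regularized classifier weight them is one simple way to realize the "combined amplitude and wavelet set"; the actual study may have used a different feature fusion or classifier.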