Objectives: Brain-computer interfaces (BCIs) are no longer used only by healthy participants under controlled laboratory conditions, but also by patients and end-users controlling applications in their homes or clinics, without BCI experts present. But are the technology and the field mature enough for this? In particular, the successful operation of applications, such as text-entry systems or assistive mobility devices like tele-presence robots, requires a good level of BCI control. How much training is needed to achieve such a level? Is it possible to train naïve end-users in 10 days to successfully control such applications? Materials and methods: In this work, we report our experiences of training 24 motor-disabled participants at rehabilitation clinics or at the end-users' homes, without BCI experts present. We also share the lessons learned while transferring BCI technologies from the lab to users' homes or clinics. Results: The most important outcome is that fifty percent of the participants achieved good BCI performance and could successfully control the applications (tele-presence robot and text-entry system). For the tele-presence robot, the participants achieved an average performance ratio of 0.87 (max. 0.97), and for the text-entry application a mean of 0.93 (max. 1.0). The lessons learned and the gathered user feedback range from purely BCI-related problems (technical and handling), to common communication issues among the different people involved, to issues encountered while controlling the applications. Conclusion: The points raised in this paper are widely applicable, and we anticipate that other groups may face them similarly as they bring BCI technology to end-users, to home environments, and towards application prototype control.
The aim of this work is to present the development of a hybrid brain-computer interface (hBCI) which combines existing input devices with a BCI. The BCI remains active in the background, available whenever the user wishes to extend the types of inputs available to an assistive technology system, but the user can also choose not to use the BCI at all. The hBCI may, on the one hand, decide which input channel(s) offer the most reliable signal(s) and switch between input channels to improve information transfer rate, usability, or other factors, or, on the other hand, fuse various input channels. One major goal is therefore to bring BCI technology to a level where it can be used in a maximum number of scenarios in a simple way. To achieve this, it is of great importance that the hBCI is able to operate reliably for long periods, recognizing and adapting to changes as it does so. This goal is only achievable if the many different subsystems of the hBCI work together. Since no single research institute can provide all of this functionality, collaboration between institutes is necessary. To enable such collaboration, a new concept and common software framework is introduced. It consists of four interfaces connecting the classical BCI modules: signal acquisition, preprocessing, feature extraction, classification, and the application. It also provides the concepts of fusion and shared control. In a proof of concept, the functionality of the proposed system was demonstrated.
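The modular pipeline described above (signal acquisition, preprocessing, feature extraction, classification, application, connected through four interfaces) can be sketched as follows. This is an illustrative toy model only, not the authors' actual framework; all class and stage names are hypothetical.

```python
# Illustrative sketch of a modular BCI pipeline: four interfaces connect
# five interchangeable stages. Each stage is a plain callable, so any
# institute could supply its own implementation of any stage.
class BCIPipeline:
    """Chains acquisition -> preprocessing -> feature extraction ->
    classification -> application."""

    def __init__(self, acquire, preprocess, extract, classify, apply_cmd):
        self.stages = [acquire, preprocess, extract, classify, apply_cmd]

    def step(self):
        data = None
        for stage in self.stages:
            data = stage(data)  # each interface passes one stage's output on
        return data


# Hypothetical stage implementations, for demonstration only.
acquire = lambda _: [0.1, 0.5, 0.4]                       # raw samples
preprocess = lambda x: [v - sum(x) / len(x) for v in x]   # mean removal
extract = lambda x: [max(x), min(x)]                      # crude features
classify = lambda f: "LEFT" if f[0] > abs(f[1]) else "RIGHT"
apply_cmd = lambda c: f"application received command: {c}"

pipeline = BCIPipeline(acquire, preprocess, extract, classify, apply_cmd)
print(pipeline.step())  # prints "application received command: RIGHT"
```

Because every stage shares the same call signature, swapping in a different classifier or acquisition backend requires no change to the other modules, which mirrors the collaboration goal stated in the abstract.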
Abstract-This paper discusses and evaluates the role of a shared control approach in a BCI-based telepresence framework. Driving a mobile device using human brain signals could improve the quality of life of people suffering from severe physical disabilities. By means of a bidirectional audio/video connection to a robot, the BCI user is able to interact actively with relatives and friends located in different rooms. However, controlling a robot through an uncertain channel such as a BCI can be complicated and exhausting. Shared control can facilitate the operation of brain-controlled telepresence robots, as demonstrated by the experimental results reported here. In fact, it allowed all subjects to complete a rather complex task (driving the robot in a natural environment along a path with several targets and obstacles) in shorter times and with fewer mental commands.
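The shared-control principle described above can be sketched as a simple blending rule: the robot weighs the user's (possibly noisy) BCI steering command against a sensor-driven obstacle-avoidance term. The function name and the linear blending rule are illustrative assumptions, not the authors' actual controller.

```python
# Minimal shared-control sketch: the closer an obstacle, the more
# authority the autonomous avoidance behavior receives, so the user
# needs fewer corrective mental commands.
def shared_control(user_turn: float, obstacle_turn: float,
                   obstacle_proximity: float) -> float:
    """Blend user intent with autonomous obstacle avoidance.

    user_turn, obstacle_turn: desired turn rates in [-1, 1]
    obstacle_proximity: 0.0 (path clear) .. 1.0 (imminent collision)
    """
    w = max(0.0, min(1.0, obstacle_proximity))
    return (1.0 - w) * user_turn + w * obstacle_turn


# Clear path: the user's command passes through almost unchanged.
print(shared_control(user_turn=0.8, obstacle_turn=-0.5, obstacle_proximity=0.1))
# Near an obstacle: avoidance dominates and steers the robot away.
print(shared_control(user_turn=0.8, obstacle_turn=-0.5, obstacle_proximity=0.9))
```

Real systems typically use richer policies (e.g., dynamical-systems or cost-based planners), but even this linear blend shows why an uncertain command channel becomes usable once the robot contributes low-level competence.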
Abstract. Objective. While brain-computer interfaces (BCIs) for communication have reached considerable technical maturity, there is still a great need for state-of-the-art evaluation by end-users outside laboratory environments. To achieve this primary objective, it is necessary to augment a BCI with a series of components that allow end-users to type text effectively. Approach. This work presents the clinical evaluation of a motor imagery (MI) BCI text-speller, called BrainTree, by 6 severely disabled end-users and 10 able-bodied users. Additionally, we define a generic model of code-based BCI applications, which serves as an analytical tool for evaluation and design. Main results. We show that all users achieved remarkable usability and efficiency outcomes in spelling. Furthermore, our model-based analysis highlights the added value of human-computer interaction (HCI) techniques and hybrid BCI error-handling mechanisms, and reveals the effects of BCI performance on usability and efficiency in code-based applications. Significance. This study demonstrates the usability potential of code-based MI spellers, with BrainTree being the first to be evaluated by a substantial number of end-users, establishing them as a viable, competitive alternative to other popular BCI spellers. Another major outcome of our model-based analysis is the derivation of an 80% minimum command accuracy requirement for successful code-based application control, revising upwards previous estimates in the literature.
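To build intuition for why a minimum command accuracy matters in code-based spellers, consider a toy model (an illustrative assumption, not the paper's actual derivation): each symbol requires a code of k commands, each delivered correctly with probability p, and every wrong command costs one extra "undo" command, assumed reliable here. The expected command count per symbol then grows steeply as p drops.

```python
# Toy cost model for a code-based speller with a reliable undo.
# Per code step: E = p*1 + (1-p)*(2 + E)  =>  E = (2 - p) / p
def expected_commands(p: float, k: int = 5) -> float:
    """Expected number of commands to complete a k-step code at accuracy p."""
    if not 0.0 < p <= 1.0:
        raise ValueError("accuracy p must be in (0, 1]")
    return k * (2.0 - p) / p


for p in (0.9, 0.8, 0.7, 0.6):
    print(f"accuracy {p:.0%}: {expected_commands(p):.1f} commands per symbol")
```

Under this sketch, dropping from 80% to 60% accuracy raises the expected cost per symbol from 7.5 to about 11.7 commands; the paper's own model, which additionally accounts for imperfect error handling, arrives at the 80% threshold reported above.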
Abstract-In this paper we show how healthy subjects can operate a non-invasive asynchronous BCI to control an FES neuroprosthesis and manipulate objects to carry out daily tasks under ecological conditions. Both experienced and novice subjects proved able to deliver mental commands with high accuracy and speed. Our neuroprosthetic approach relies on a natural interaction paradigm, in which subjects deliver congruent MI commands (i.e., they imagine a movement of the same hand they control through FES). Furthermore, we have tested our approach in a common daily task, handwriting, which requires the user to split their attention and multitask between BCI control, reaching, and the primary handwriting task itself. Interestingly, the very low number of erroneous trials illustrates that during the experiments subjects were able to deliver commands exactly when they intended to do so. Similarly, subjects could perform actions while delivering, or preparing to deliver, mental commands.
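The asynchronous-control property described above (commands are delivered only when the user intends them, and rest periods produce no spurious output) is commonly achieved by integrating classifier evidence over time and firing only when confidence crosses a threshold. The following sketch illustrates that principle; the smoothing rule, threshold, and parameter values are illustrative assumptions, not the authors' decoder.

```python
# Evidence-accumulation sketch for asynchronous BCI command delivery:
# per-sample probabilities of motor imagery are exponentially smoothed,
# and a command fires only once the smoothed belief crosses a threshold.
def accumulate(probs, threshold=0.9, alpha=0.7, prior=0.5):
    """Return the sample index at which a command is emitted, or None.

    probs: per-sample classifier outputs P(motor imagery) in [0, 1]
    alpha: smoothing factor; higher = slower, more conservative decisions
    """
    belief = prior
    for i, p in enumerate(probs):
        belief = alpha * belief + (1 - alpha) * p
        if belief >= threshold:
            return i
    return None


# Intentional motor imagery: sustained high evidence triggers a command.
print(accumulate([0.8, 0.9, 0.95, 0.97, 0.99, 0.99, 0.99]))
# Idle/rest: noisy, non-sustained evidence never crosses the threshold,
# so no spurious FES activation occurs.
print(accumulate([0.6, 0.3, 0.7, 0.4, 0.5]))  # -> None
```

The trade-off is latency versus robustness: a higher threshold or stronger smoothing suppresses unintended commands during rest at the cost of slower command delivery, which is the balance an asynchronous neuroprosthesis must strike.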