The results reveal certain aspects that may affect the success of speech imagery classification from EEG signals, such as sound, meaning, and word complexity. This can potentially extend the capability of utilizing speech imagery in future BCI applications. The speech imagery dataset, collected from a total of 15 subjects, is also published.
In this paper, a robustness analysis of constrained Image-Based Visual Servoing (IBVS) based on Nonlinear Model Predictive Control (NMPC) is presented. It is known that real applications, such as aerial or fast underwater robotic systems, suffer from the presence of external disturbances. These kinds of disturbances are inevitable in physical systems, so it is of great interest to employ robust controllers, and a rigorous robustness analysis should therefore be conducted. In this paper, the Image-Based Visual Servoing system under the NMPC framework is proven to be Input-to-State Stable (ISS), and a permissible upper bound on the disturbances is provided. Finally, the validity of the theoretical results is illustrated through a simulated example.
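To make the setup concrete, the following minimal sketch (not the authors' implementation) shows a point-feature IBVS loop in which the camera velocity is chosen by a finite-horizon, NMPC-style optimization under input constraints, with a bounded additive disturbance injected to illustrate the Input-to-State Stability question. The horizon length, constraint values, and disturbance bound are illustrative assumptions.

```python
# Sketch: constrained NMPC-style IBVS with a bounded disturbance (illustrative only).
import numpy as np
from scipy.optimize import minimize

def interaction_matrix(x, y, Z):
    """Classical interaction (image Jacobian) matrix of a point feature (x, y)
    at depth Z, relating feature velocity to the 6-DoF camera velocity."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def nmpc_step(s, s_star, Z, horizon=5, dt=0.05, v_max=0.5):
    """One NMPC step: choose a constant camera velocity over the horizon that
    minimizes the predicted feature error, subject to |v_i| <= v_max."""
    def cost(v):
        e = s - s_star
        J = 0.0
        for _ in range(horizon):
            L = interaction_matrix(e[0] + s_star[0], e[1] + s_star[1], Z)
            e = e + dt * L @ v            # predicted error propagation
            J += e @ e + 1e-3 * v @ v     # stage cost: error + control effort
        return J
    bounds = [(-v_max, v_max)] * 6        # input constraints
    res = minimize(cost, np.zeros(6), bounds=bounds, method="L-BFGS-B")
    return res.x

# Closed loop with a bounded disturbance d(t); under ISS the feature error
# remains bounded by a function of the disturbance bound.
s, s_star, Z = np.array([0.2, -0.1]), np.zeros(2), 1.0
rng = np.random.default_rng(0)
for k in range(100):
    v = nmpc_step(s, s_star, Z)
    d = 0.01 * rng.uniform(-1, 1, size=2)          # bounded disturbance
    s = s + 0.05 * interaction_matrix(s[0], s[1], Z) @ v + d
print("final feature error norm:", np.linalg.norm(s - s_star))
```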
This paper proposes a novel adaptive online-feedback methodology for Brain Computer Interfaces (BCI). The method uses ElectroEncephaloGraphic (EEG) signals and combines motor with speech imagery to allow for tasks that involve multiple degrees of freedom (DoF). The main approach utilizes the covariance matrix descriptor as a feature and the Relevance Vector Machines (RVM) classifier. The novel contributions include (1) a new method to select representative data to update the RVM model, and (2) an online classifier that is an adaptively-weighted mixture of RVM models, accounting for the users' exploration and exploitation processes during the learning phase. Instead of evaluating the subjects' performance solely on the conventional metric of accuracy, we analyze the improvement of their skills based on three other criteria, namely the quality of the confusion matrix, the separability of the data, and their instability. After collecting calibration data for 8 minutes in the first run, 8 participants were able to control the system while receiving visual feedback in the subsequent runs. We observed significant improvement in all subjects, including two who fell into the BCI illiteracy category. Our proposed BCI system complements existing approaches in several aspects. First, the co-adaptation paradigm not only adapts the classifiers but also allows the users to actively discover their own way of using the BCI through their exploration and exploitation processes. Furthermore, the auto-calibrating system can be used immediately with minimal calibration time. Finally, this is the first work to combine motor and speech imagery in an online feedback experiment to provide multiple DoF for BCI control applications.
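The sketch below illustrates the two ingredients named in the abstract: covariance-matrix descriptors as EEG trial features, and an adaptively-weighted mixture of probabilistic classifiers whose weights track recent online performance. It is not the authors' implementation: scikit-learn has no built-in RVM, so logistic regression is used as a stand-in, and the window sizes, weight-update rule, and data are assumptions for illustration.

```python
# Sketch: covariance features + adaptively-weighted classifier mixture (illustrative only).
import numpy as np
from scipy.linalg import logm
from sklearn.linear_model import LogisticRegression

def covariance_feature(trial):
    """Map one EEG trial (channels x samples) to a vector: the upper triangle
    of the matrix logarithm of its regularized spatial covariance."""
    C = np.cov(trial) + 1e-6 * np.eye(trial.shape[0])
    L = logm(C).real
    iu = np.triu_indices(L.shape[0])
    return L[iu]

class WeightedMixture:
    """Mixture of probabilistic classifiers; each model's weight is an
    exponentially smoothed estimate of its recent online accuracy."""
    def __init__(self, models, alpha=0.1):
        self.models = models
        self.weights = np.ones(len(models)) / len(models)
        self.alpha = alpha

    def predict_proba(self, X):
        probs = np.stack([m.predict_proba(X) for m in self.models])  # (M, N, K)
        return np.tensordot(self.weights, probs, axes=1)             # (N, K)

    def update_weights(self, X, y):
        for i, m in enumerate(self.models):
            acc = (m.predict(X) == y).mean()
            self.weights[i] = (1 - self.alpha) * self.weights[i] + self.alpha * acc
        self.weights /= self.weights.sum()

# Toy usage: two models trained on different (synthetic) calibration blocks.
rng = np.random.default_rng(0)
trials = rng.standard_normal((40, 8, 128))      # 40 trials, 8 channels, 128 samples
labels = rng.integers(0, 2, size=40)            # 2 imagery classes
X = np.array([covariance_feature(t) for t in trials])
m1 = LogisticRegression(max_iter=1000).fit(X[:20], labels[:20])
m2 = LogisticRegression(max_iter=1000).fit(X[20:], labels[20:])
mix = WeightedMixture([m1, m2])
mix.update_weights(X, labels)                   # adapt the mixture weights online
print("mixture class probabilities for first trial:", mix.predict_proba(X[:1]))
```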