Brain-computer interfaces (BCIs) require real-time feature extraction to translate input EEG signals recorded from a subject into an output command or decision. Owing to the inherent difficulties of EEG signal processing and neural decoding, many feature extraction algorithms are complex and computationally demanding. Software does exist to perform real-time feature extraction and classification of EEG signals; however, its dependence on a personal computer is a major obstacle to bringing these technologies, with ease of use, to home and mobile users. We present the FPGA design and novel architecture of a generalized platform that provides a set of predefined features and preprocessing steps that a user can configure for BCI applications. The preprocessing steps include power line noise cancellation and baseline removal, while the feature set includes a combination of linear and nonlinear, univariate and bivariate measures commonly used in BCIs. We compare our results with software implementations and validate the platform by implementing a seizure detection algorithm on a standard dataset, obtaining a classification accuracy of over 96%. A gradual transition of BCI systems to hardware would prove beneficial in terms of compactness, power consumption, and a much faster response to stimuli.
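To make the two preprocessing steps named above concrete, the sketch below is a minimal software reference (not the paper's FPGA implementation): a notch filter to suppress power line interference followed by moving-average baseline removal on a single EEG channel. The sampling rate, mains frequency, quality factor, and window length are assumed example values, and the helper name `preprocess_eeg` is hypothetical.

```python
import numpy as np
from scipy import signal

def preprocess_eeg(x, fs=256.0, mains_hz=50.0, baseline_win_s=1.0):
    """Software reference for the two preprocessing steps:
    power line noise cancellation and baseline removal.
    fs, mains_hz, and baseline_win_s are assumed example values."""
    # Power line noise cancellation: notch filter centred on the mains frequency.
    b, a = signal.iirnotch(w0=mains_hz, Q=30.0, fs=fs)
    x_notched = signal.filtfilt(b, a, x)

    # Baseline removal: subtract a slow moving-average trend.
    win = int(baseline_win_s * fs)
    baseline = np.convolve(x_notched, np.ones(win) / win, mode="same")
    return x_notched - baseline

# Example: synthetic EEG with a 10 Hz rhythm, 50 Hz interference, and slow drift.
fs = 256.0
t = np.arange(0, 4.0, 1.0 / fs)
raw = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 50 * t) + 0.3 * t
clean = preprocess_eeg(raw, fs=fs)
```

In the hardware platform these operations would be realized as fixed-point filter blocks rather than floating-point calls; the Python version only illustrates the signal-level intent of the configurable preprocessing stage.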
We propose a biologically inspired model that enables a humanoid robot to learn how to track its end effector by integrating visual and proprioceptive cues as it interacts with the environment. A key novel feature of this model is the incorporation of sensorimotor prediction, where the robot predicts the sensory consequences of its current body motion as measured by proprioceptive feedback. The robot develops the ability to perform smooth pursuit-like eye movements to track its hand, both in the presence and absence of visual input, and to track exteroceptive visual motions. Our framework makes a number of advances over past work. First, our model does not require a fiducial marker to indicate the robot hand explicitly. Second, it does not require the forward kinematics of the robot arm to be known. Third, it does not depend on predefined visual feature descriptors; these are learned during interaction with the environment. We demonstrate that the use of prediction in multisensory integration enables the agent to better combine information from proprioceptive and visual cues. The proposed model has properties that are qualitatively similar to the characteristics of human eye-hand coordination.
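As a rough sketch of the idea of prediction-based multisensory integration (a deliberate simplification, not the authors' learning architecture), one can imagine a learned map from joint velocities to the hand's image-plane motion; the gaze command then fuses the observed visual motion with this proprioceptive prediction and falls back to the prediction alone when visual input is absent. All class and parameter names below, and the fixed fusion weight, are assumptions for illustration.

```python
import numpy as np

class PredictiveHandTracker:
    """Toy sensorimotor-prediction tracker: predicts the hand's visual motion
    from proprioceptive (joint-velocity) input and fuses it with the observed
    visual motion when one is available. Illustrative only."""

    def __init__(self, n_joints, alpha=0.6):
        self.W = np.zeros((2, n_joints))   # joint velocity -> image-plane velocity map
        self.alpha = alpha                 # weight given to the visual cue

    def fit(self, joint_vels, image_vels):
        # Learn the forward sensorimotor map by least squares from
        # self-observation data gathered while moving the arm.
        # joint_vels: (T, n_joints), image_vels: (T, 2)
        sol, *_ = np.linalg.lstsq(joint_vels, image_vels, rcond=None)
        self.W = sol.T

    def gaze_command(self, joint_vel, visual_vel=None):
        predicted = self.W @ joint_vel      # predicted hand motion on the retina
        if visual_vel is None:              # no visual input: rely on prediction
            return predicted
        # Simple convex combination of visual and predicted motion.
        return self.alpha * visual_vel + (1 - self.alpha) * predicted
```

The actual model learns its visual features and predictive mappings during interaction rather than using a fixed linear map; the sketch only shows why a proprioceptive prediction lets tracking continue when the visual cue drops out.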