This paper presents a novel brain-computer interface (BCI) based on motion-onset visual evoked potentials (mVEPs). The mVEP has not previously been used in BCI research, although it has been widely studied in basic research. For the BCI application, brief motion of objects embedded in onscreen virtual buttons is used to evoke an mVEP that is time-locked to the onset of motion. EEG data recorded from 15 subjects are used to investigate the spatio-temporal pattern of the mVEP in this paradigm. The N2 and P2 components, with distinct temporo-occipital and parietal topographies, respectively, are selected as the salient features of the brain response to the attended target, which the subject selects by gazing at it. The computer determines the attended target by identifying which button elicited prominent N2/P2 components. Besides simple feature extraction based on N2/P2 area calculation, stepwise linear discriminant analysis is adopted to assess the target detection accuracy of a five-class BCI. A mean accuracy of 98% is achieved when data from ten trials are averaged. Even with only three trials, the accuracy remains above 90%, suggesting that the proposed mVEP-based BCI could achieve a high information transfer rate in an online implementation.
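To make the detection pipeline concrete, the following is a minimal sketch (not the authors' code) of how trial averaging, N2/P2 area features, and a linear discriminant classifier could be combined to pick the attended button. The sampling rate, time windows, channel indices, and all function names are assumptions introduced for illustration, and plain LDA is used as a stand-in for the paper's stepwise LDA (feature selection is omitted).

```python
# Minimal sketch of an mVEP target-detection pipeline.
# Sampling rate, time windows, and channel indices are assumptions, not values from the paper.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250                       # assumed EEG sampling rate (Hz)
N2_WIN = (0.16, 0.24)          # assumed N2 window after motion onset (s)
P2_WIN = (0.24, 0.36)          # assumed P2 window after motion onset (s)
TEMPORO_OCCIPITAL = [0, 1, 2]  # hypothetical temporo-occipital channel indices
PARIETAL = [3, 4, 5]           # hypothetical parietal channel indices

def area(avg_epoch, channels, window):
    """Area (sum of samples) of the channel-averaged response inside a time window."""
    start, stop = (int(t * FS) for t in window)
    return avg_epoch[channels, start:stop].mean(axis=0).sum()

def features(epochs):
    """epochs: (n_trials, n_channels, n_samples) EEG epochs time-locked to motion onset."""
    avg = epochs.mean(axis=0)                  # average across repeated trials
    return np.array([
        area(avg, TEMPORO_OCCIPITAL, N2_WIN),  # N2 area over temporo-occipital sites
        area(avg, PARIETAL, P2_WIN),           # P2 area over parietal sites
    ])

# Plain LDA stands in for the stepwise LDA used in the paper.
clf = LinearDiscriminantAnalysis()

def train(button_epochs, labels):
    """button_epochs: list of epoch arrays, one per button; labels: 1 = attended, 0 = not."""
    X = np.stack([features(e) for e in button_epochs])
    clf.fit(X, labels)

def detect_target(button_epochs):
    """Return the index of the button whose averaged response is most target-like."""
    X = np.stack([features(e) for e in button_epochs])
    scores = clf.decision_function(X)          # larger score = more prominent N2/P2
    return int(np.argmax(scores))
```

In this sketch, averaging more trials before feature extraction reduces noise in the N2/P2 areas, which is consistent with the reported trade-off between the number of trials and detection accuracy.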