A complete signal processing strategy is presented to detect and precisely recognize tongue movement by monitoring changes in airflow that occur in the ear canal. Tongue movements within the human oral cavity create unique, subtle pressure signals in the ear that can be processed to produce command signals in response to that movement. The strategy developed for the human-machine interface architecture includes energy-based signal detection and segmentation to extract ear pressure signals due to tongue movements, signal normalization to decrease trial-to-trial variations in the signals, and pairwise cross-correlation signal averaging to obtain accurate estimates from ensembles of pressure signals. A new decision fusion classification algorithm is formulated to assign the pressure signals to their respective tongue-movement classes. The complete strategy of signal detection and segmentation, estimation, and classification is tested on 4 tongue movements performed by 4 subjects. Extensive experiments demonstrate that the ear pressure signals due to the tongue movements are distinct and that the 4 pressure signals can be classified with over 96% accuracy across the 4 subjects using the decision fusion classification algorithm.
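To make the pipeline concrete, the following is a minimal sketch of the three pre-classification stages named above: energy-based detection and segmentation, normalization, and pairwise cross-correlation averaging. All function names, window sizes, and thresholds are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def segment_by_energy(x, fs, win_ms=20.0, thresh_ratio=0.1):
    """Return (start, end) sample indices of regions whose short-time
    energy exceeds a fraction of the peak short-time energy.
    win_ms and thresh_ratio are assumed values, not from the paper."""
    win = max(1, int(fs * win_ms / 1000.0))
    energy = np.convolve(x ** 2, np.ones(win) / win, mode="same")
    active = energy > thresh_ratio * energy.max()
    edges = np.flatnonzero(np.diff(active.astype(int)))
    if active[0]:
        edges = np.r_[0, edges]           # recording starts inside a segment
    if active[-1]:
        edges = np.r_[edges, len(x) - 1]  # recording ends inside a segment
    return list(zip(edges[::2], edges[1::2]))

def normalize(seg):
    """Zero-mean, peak-amplitude normalization to reduce trial-to-trial
    amplitude variation."""
    seg = seg - seg.mean()
    return seg / (np.abs(seg).max() + 1e-12)

def align_and_average(segments):
    """Align every segment to the first one at its best cross-correlation
    lag, then average the aligned ensemble to estimate the class waveform."""
    ref = np.asarray(segments[0], dtype=float)
    n = len(ref)
    acc = ref.copy()
    for seg in segments[1:]:
        seg = np.asarray(seg, dtype=float)
        seg = seg[:n] if len(seg) >= n else np.pad(seg, (0, n - len(seg)))
        lag = np.argmax(np.correlate(ref, seg, mode="full")) - (n - 1)
        acc += np.roll(seg, lag)
    return acc / len(segments)
```

Averaging aligned segments rather than raw ones matters here: without lag alignment, small timing jitter between trials would smear the subtle pressure waveform that the classifier depends on.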
Abstract - We introduce an unobtrusive sensor-based control system for a human-machine interface to control robotic and rehabilitative devices. The interface is capable of directing robotic or assist devices in response to tongue movement and/or speech without insertion of any device in the vicinity of the oral cavity. The interface is centered on the unique properties of the human ear as an acoustic output device. Our work has shown that various movements within the oral cavity create unique, traceable pressure changes in the human ear, which can be measured with a simple sensor (such as a microphone) and analyzed to produce command signals, which can in turn be used to control robotic devices. In this work, we present: 1) an analysis of the sensitivity of human ear canals as acoustic output devices, 2) the design of a new sensor for monitoring airflow in the aural canal, 3) pattern recognition procedures for recognition of both speech and tongue movement by monitoring aural flow across several human test subjects, and 4) a conceptual design and simulation of the machine interface system.
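As a rough illustration of how recognized aural-flow patterns could be turned into command signals, the sketch below matches a normalized trial against per-class templates (such as those produced by `align_and_average` above) and maps the winning class to a device command. The correlation-based decision rule, class labels, and command names are assumptions for illustration only, not the classifier described in the papers.

```python
import numpy as np

def classify(trial, templates):
    """Assign a normalized trial to the template class with the highest
    peak normalized cross-correlation score."""
    scores = {}
    for label, tmpl in templates.items():
        n = min(len(trial), len(tmpl))
        a, b = trial[:n], tmpl[:n]
        peak = np.correlate(a, b, mode="full").max()
        scores[label] = peak / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return max(scores, key=scores.get)

# Hypothetical mapping from four tongue-movement classes to commands
# for a robotic or assistive device.
COMMANDS = {"left": "TURN_LEFT", "right": "TURN_RIGHT",
            "up": "MOVE_FORWARD", "down": "STOP"}

def command_for(trial, templates):
    """Produce a device command from one detected ear-pressure trial."""
    return COMMANDS[classify(trial, templates)]
```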