A light-weight, wearable, wireless gaze tracker with an integrated selection command source for human-computer interaction is introduced. The prototype system combines head-mounted, video-based gaze tracking with capacitive facial movement detection, enabling multimodal interaction: pointing with gaze and making selections with facial gestures. The system is targeted mainly at people whose disabilities limit the use of their hands. The hardware was made wireless to remove the need to take the device off when moving away from the computer, and to allow future use in more mobile contexts. The algorithms that determine eye and head orientations to map gaze direction to on-screen coordinates are presented, together with the algorithm that detects movements from the measured capacitance signal. Point-and-click experiments were conducted to assess the performance of the multimodal system. The results show decent performance in laboratory and office conditions. The overall point-and-click accuracy in the multimodal experiments is comparable to the errors reported in previous research on head-mounted, single-modality gaze tracking that does not compensate for changes in head orientation.
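The abstract does not detail how gaze direction is mapped to on-screen coordinates. A common approach in head-mounted, video-based gaze tracking is to fit a low-order polynomial mapping from eye-feature coordinates to screen coordinates during a calibration run; the sketch below assumes that approach, and all function names and the quadratic basis are illustrative, not the authors' method.

```python
import numpy as np

def _design_matrix(eye_xy):
    """Quadratic polynomial basis over eye-feature coordinates."""
    x, y = eye_xy[:, 0], eye_xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_gaze_mapping(eye_xy, screen_xy):
    """Fit the mapping by least squares from a calibration set.

    eye_xy:    (N, 2) eye-feature coordinates from the eye camera
    screen_xy: (N, 2) known on-screen calibration-target positions
    Returns a (6, 2) coefficient matrix."""
    coeffs, *_ = np.linalg.lstsq(_design_matrix(eye_xy), screen_xy, rcond=None)
    return coeffs

def map_gaze(coeffs, eye_xy):
    """Map new eye-feature coordinates to screen coordinates."""
    return _design_matrix(eye_xy) @ coeffs
```

With around nine or more calibration points the quadratic model is over-determined and the least-squares fit averages out feature noise; compensating for head orientation, as the abstract describes, would require an additional head-pose term not shown here.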
A capacitive facial movement detection method designed for human-computer interaction is presented. Some point-and-click interfaces use facial electromyography for clicking; the presented method provides a contactless alternative. Electrodes with no galvanic coupling to the face are used to form electric fields, and changes in those fields due to facial movements are detected by measuring capacitances between the electrodes. A prototype device for measuring a capacitance signal affected by frowning and lifting the eyebrows was constructed, using a commercial integrated circuit for capacitive touch sensors. The applied movement detection algorithm uses an adaptive approach to remain operational in noisy and dynamic environments. Experiments with 10 test subjects showed that, under controlled circumstances, the movements are detected with good efficiency, but classifying the movements as frowns or eyebrow lifts is more problematic. Integration with a two-dimensional (2D) pointing solution and further experiments are still required.
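The abstract names an adaptive detection approach without specifying it. One plausible sketch, assuming a drifting-baseline scheme (the parameter values and the freeze-on-event rule are illustrative assumptions, not the published algorithm): track a slowly adapting baseline and noise estimate of the capacitance signal, and flag a movement when the deviation from the baseline exceeds a multiple of the noise level.

```python
import numpy as np

def detect_movements(signal, alpha=0.01, k=4.0, warmup=50):
    """Adaptive threshold detector for a 1-D capacitance signal.

    alpha:  adaptation rate of the baseline and noise estimates
    k:      detection threshold in multiples of the noise estimate
    warmup: initial samples used to seed the estimates
    Returns a boolean array marking detected-movement samples."""
    baseline = np.mean(signal[:warmup])
    noise = np.std(signal[:warmup]) + 1e-9
    events = np.zeros(len(signal), dtype=bool)
    for i, s in enumerate(signal):
        dev = abs(s - baseline)
        if dev > k * noise:
            events[i] = True  # movement: freeze the estimates
        else:
            # adapt only during quiet periods, so the detector
            # tracks slow environmental drift but not the gestures
            baseline += alpha * (s - baseline)
            noise += alpha * (dev - noise)
    return events
```

Freezing the estimates during a detection keeps a sustained frown from being absorbed into the baseline, which is one way to obtain the "operation capability in noisy and dynamic environments" the abstract claims.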
The goal of this research was to investigate neural network-based methods for processing biomedical signals. We developed a neural network-based method for detecting voluntarily produced changes in facial muscle action potentials. Electromyographic signals were recorded from the corrugator supercilii and zygomaticus major facial muscles. The facial muscle action potentials of thirty subjects were measured while they performed a series of voluntary contractions of these muscles. Wavelet denoising or digital bandpass filtering was applied in preprocessing the signals. A neural network was used for offline classification of the various phases of these signals. The results show that the developed neural network-based technique functioned very well, producing a reliable recognition accuracy of 96 to 99%. Because of these promising results, we will proceed with the development of this method for real-time applications that benefit from the analysis of electromyographic signals.
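The abstract specifies neither the features nor the network architecture. The sketch below is a much-simplified stand-in: windowed root-mean-square (RMS) amplitude as the feature — a common EMG representation, assumed here rather than taken from the study — classified by a one-hidden-layer network trained with plain gradient descent into rest versus contraction phases.

```python
import numpy as np

def rms_features(emg, win=64):
    """Windowed RMS amplitude of a 1-D EMG signal, shape (n_windows, 1)."""
    n = len(emg) // win
    return np.sqrt(np.mean(emg[:n * win].reshape(n, win) ** 2,
                           axis=1, keepdims=True))

class TinyMLP:
    """One-hidden-layer network for two-class phase labelling;
    an illustrative stand-in, not the network used in the study."""

    def __init__(self, hidden=8, lr=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 1.0, (1, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 1.0, (hidden, 1))
        self.b2 = 0.0
        self.lr = lr

    def forward(self, x):
        self.h = np.tanh(x @ self.w1 + self.b1)
        return 1.0 / (1.0 + np.exp(-(self.h @ self.w2 + self.b2)))

    def train(self, x, y, epochs=500):
        for _ in range(epochs):
            p = self.forward(x)
            g = (p - y) / len(x)                    # cross-entropy gradient
            gh = (g @ self.w2.T) * (1.0 - self.h ** 2)
            self.w2 -= self.lr * self.h.T @ g
            self.b2 -= self.lr * g.sum()
            self.w1 -= self.lr * x.T @ gh
            self.b1 -= self.lr * gh.sum(axis=0)
```

Because rest and contraction differ mainly in amplitude, even this single RMS feature separates the two phases well on clean data; the study's 96-99% accuracy over multiple phases would need richer features and preprocessing.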
Reliable detection of the onset and termination of muscle contraction is an essential task in the analysis of surface electromyographic signals. An event detection method that can be used for sequential detection of both onset and termination of muscle contraction is described. The method builds on the techniques of envelope detection, two-point backward differencing and threshold-based decision making, so fast conventional digital signal processing techniques can be used in its implementation. Because the method is computationally efficient, it can be employed in both real-time and non-real-time applications. This text discusses the architecture of the method, considers the practical aspects of its implementation, analyses its computational complexity and evaluates its performance on the basis of experimental results.
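One plausible reading of the named building blocks, sketched below with illustrative parameter values (the envelope filter order, the hysteresis thresholds and the state machine are assumptions, not the published design): full-wave rectification plus a first-order IIR low-pass as the envelope detector, and a two-point backward difference of the thresholded decision signal to locate onset and termination crossings sequentially.

```python
import numpy as np

def detect_events(emg, alpha=0.02, t_on=0.2, t_off=0.1):
    """Sequential onset/termination detection on a 1-D EMG signal.

    alpha: envelope smoothing coefficient (first-order IIR low-pass)
    t_on:  envelope level that triggers an onset
    t_off: envelope level (below t_on, for hysteresis) for termination
    Returns a list of (sample_index, 'onset' | 'termination') events."""
    env = np.empty(len(emg))
    e = 0.0
    for i, x in enumerate(emg):
        e += alpha * (abs(x) - e)   # rectify + low-pass: envelope
        env[i] = e
    above, below = env > t_on, env < t_off
    events, state = [], 0           # state: 0 = rest, 1 = contraction
    for i in range(1, len(env)):
        # two-point backward difference of the binary decision signals
        # picks out the threshold crossings
        if state == 0 and above[i] and not above[i - 1]:
            events.append((i, "onset"))
            state = 1
        elif state == 1 and below[i] and not below[i - 1]:
            events.append((i, "termination"))
            state = 0
    return events
```

Every step is a constant-time update per sample, consistent with the abstract's claim of a computationally efficient method suitable for real-time use; the low-pass introduces a detection lag of roughly 1/alpha samples, the usual trade-off against false triggers.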