Biometric authentication is gaining popularity as further modalities, such as fingerprint, iris, face, voice, and gait, are exploited. We explore the effectiveness of three simple Electroencephalography (EEG) based biometric authentication tasks: resting, thinking about a picture, and moving a single finger. We present the data processing steps we use for authentication, including extracting features from the frequency power spectrum and Mel-frequency cepstral coefficients (MFCC), and training a multilayer perceptron classifier. For evaluation, we record an EEG dataset of 27 test subjects. We use three setups, baseline, task-agnostic, and task-specific, to investigate whether person-specific features can be detected across different tasks for authentication. We further evaluate whether different tasks can be distinguished. Our results suggest that tasks are distinguishable, and that our authentication approach works both with features from a specific, fixed task and with features across different tasks.
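The pipeline of extracting frequency-power features and training a multilayer perceptron can be sketched as follows. This is a minimal illustration only: it uses synthetic sinusoidal signals in place of real EEG windows, and all parameter choices (window length, sampling rate, number of frequency bins, network size) are assumptions for the sketch, not the settings used in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def power_spectrum_features(window, n_bins=16):
    """Average the FFT power spectrum into coarse frequency bins."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    return np.array([b.mean() for b in np.array_split(spectrum, n_bins)])

def synth_window(freq, n=256, fs=128.0):
    """Synthetic stand-in for one EEG window: a noisy sinusoid."""
    t = np.arange(n) / fs
    return np.sin(2 * np.pi * freq * t) + 0.5 * rng.standard_normal(n)

# Two synthetic "subjects" with different dominant frequencies.
X = np.array([power_spectrum_features(synth_window(f))
              for f in [10.0] * 100 + [22.0] * 100])
y = np.array([0] * 100 + [1] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32,),
                                  max_iter=500, random_state=0))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

With clearly separated dominant frequencies, the binned power spectrum alone suffices to distinguish the two synthetic classes; real EEG data would require the full feature set (including MFCC) described above.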
Gaze gestures bear potential for user input on mobile devices, especially smart glasses, because they are always available and hands-free. So far, gaze gesture recognition approaches have utilized open-eye movements only and disregarded closed-eye movements. This paper is a first investigation of the feasibility of detecting and recognizing closed-eye gaze gestures from close-up optical sources, e.g., eye-facing cameras embedded in smart glasses. We propose four closed-eye gaze gesture protocols, which extend the alphabet of existing open-eye gaze gesture approaches. We further propose a methodology for detecting and extracting the corresponding closed-eye movements with full optical flow, time series processing, and machine learning. In the evaluation of the four protocols, we find closed-eye gaze gestures to be detected 82.8%-91.6% of the time, and extracted gestures to be recognized correctly with an accuracy of 92.9%-99.2%.
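The time-series detection step can be illustrated with a simple segmentation routine: given a per-frame optical-flow magnitude signal, movement episodes are extracted as runs of frames above an activity threshold. This is a hedged sketch, not the paper's method; the threshold and minimum-length values, and the function name, are illustrative assumptions.

```python
import numpy as np

def detect_gesture_segments(flow_mag, thresh=0.5, min_len=5):
    """Return (start, end) frame-index pairs where the optical-flow
    magnitude stays above `thresh` for at least `min_len` frames."""
    active = flow_mag > thresh
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i                      # movement episode begins
        elif not a and start is not None:
            if i - start >= min_len:       # keep only long-enough runs
                segments.append((start, i))
            start = None
    if start is not None and len(active) - start >= min_len:
        segments.append((start, len(active)))
    return segments

# Synthetic trace: 20 quiet frames, 10 frames of movement, 20 quiet.
trace = np.concatenate([np.zeros(20), np.ones(10), np.zeros(20)])
segs = detect_gesture_segments(trace)
```

Each extracted segment would then be passed to a classifier that recognizes which gesture of the alphabet was performed.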
Ambient intelligence demands collaboration schemes for distributed constrained devices that are not only highly energy efficient with respect to distributed sensing, processing, and communication, but that also respect data privacy. Traditional algorithms for distributed processing suffer in ambient intelligence domains either from limited data privacy or from excessive processing demands on constrained distributed devices. In this paper, we present Camouflage learning, a distributed machine learning scheme that obscures the trained model via probabilistic collaboration using physical-layer computation offloading, and demonstrate the feasibility of the approach on backscatter communication prototypes and in comparison with federated learning, a popular distributed learning scheme. We show that Camouflage learning is more energy efficient than traditional schemes and requires less communication overhead while reducing the computation load through physical-layer computation offloading. The scheme is synchronization-agnostic and thus appropriate for sharply constrained, synchronization-incapable devices. We demonstrate model training and inference on four distinct datasets and investigate the performance of the scheme with respect to communication range, challenging communication environments, power consumption, and the backscatter hardware prototype.
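The core idea behind physical-layer computation offloading can be illustrated abstractly: when multiple devices transmit simultaneously, the wireless channel itself superimposes (sums) their signals, so the receiver obtains an aggregate without computing it and without observing any individual contribution. The sketch below models only this superposition property with plain arrays; the actual Camouflage learning protocol (probabilistic collaboration, backscatter encoding, model obscuration) is considerably more involved, and all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each constrained device holds a private local value, e.g. a small
# gradient vector (shapes and counts here are illustrative only).
local_values = [rng.standard_normal(4) for _ in range(5)]

# Simultaneous transmission: the channel adds the signals, so the
# receiver observes only the element-wise sum -- the summation is
# "offloaded" to the physical layer at no computational cost.
received = np.sum(local_values, axis=0)

# The receiver works with the aggregate; no individual device's
# contribution is ever exposed on its own.
average_update = received / len(local_values)
```

This superposition is also what removes the per-device communication and computation overhead that schemes like federated learning incur when each update must be transmitted and aggregated explicitly.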
The effect that advances in voice interface technologies have on privacy has not yet received the attention it deserves. Systems in which multiple devices collaborate to provide a unified user interface amplify these privacy concerns. We discuss the ethical implications of voice-enabled devices for privacy in typical scenarios at home, in the office, in a car, and in public. From our findings, it follows that the reach of voice can be exploited as a feature to intuitively define the extent of privacy. In particular, the acoustic reach of speech signals can serve as a basis for designing privacy-gentle voice user interfaces that are intuitive to use. We argue that this approach poses reasonable technological requirements and establishes a natural experience of privacy that matches intuitive perception.