Biometric signals have been extensively used for user identification and authentication due to inherent characteristics that are unique to each person. The variation between the brain signals (EEG) of different people makes such signals especially suitable for biometric user identification. However, the characteristics of these signals are also influenced by the user's current condition, including their affective state. In this paper, we analyze the significance of the affect-related component of brain signals within the subject identification context. Consistent results across three different public datasets suggest that the dominant component of the signal is subject-related, but that the affective state also contributes and affects identification accuracy. Results show that identification accuracy increases when the system has been trained on EEG recordings that correspond to affective states similar to those of the sample to be identified. This improvement holds independently of the features and classification algorithm used, and is generally above 10% under a rigorous setting in which the training and validation datasets do not share data from the same recording days. This finding highlights the potential benefit of considering affective information in applications that require subject identification, such as user authentication.
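The effect described above can be illustrated with a minimal synthetic sketch, assuming EEG feature vectors composed of a subject-specific component plus an affect-dependent component. The two affective states, the component magnitudes, and the logistic-regression identifier are all illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_subjects, n_per, n_feat = 5, 40, 16

# Synthetic EEG features: a subject-specific prototype plus an
# affect-dependent shift (two hypothetical affective states, 0 and 1).
subj_proto = rng.normal(size=(n_subjects, n_feat))
affect_proto = rng.normal(size=(2, n_feat))

def make_samples(affect):
    X, y = [], []
    for s in range(n_subjects):
        base = subj_proto[s] + 0.8 * affect_proto[affect]
        X.append(base + 0.5 * rng.normal(size=(n_per, n_feat)))
        y.extend([s] * n_per)
    return np.vstack(X), np.array(y)

X0, y0 = make_samples(affect=0)          # e.g. recordings in a calm state
X1, y1 = make_samples(affect=1)          # e.g. recordings in an excited state
X0_test, y0_test = make_samples(affect=0)

# Identify subjects after training on matching vs. mismatching affect.
clf_same = LogisticRegression(max_iter=1000).fit(X0, y0)
clf_diff = LogisticRegression(max_iter=1000).fit(X1, y1)

acc_same = accuracy_score(y0_test, clf_same.predict(X0_test))
acc_diff = accuracy_score(y0_test, clf_diff.predict(X0_test))
```

On this toy data the subject component dominates, but the affect mismatch between training and test shifts the feature distribution and degrades accuracy, mirroring the paper's qualitative finding.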
Arnau-González, Pablo and Katsigiannis, Stamos and Arevalillo-Herráez, Miguel and Ramzan, Naeem (2021) 'BED: A new dataset for EEG-based biometrics.', IEEE Internet of Things Journal.
Security systems are increasingly adopting new technologies and machine learning techniques, and a variety of methods for identifying individuals from physiological signals have been developed. In this paper, we present ES1D, a deep learning approach for identifying subjects from electroencephalogram (EEG) signals captured with a low-cost device. The system consists of a Convolutional Neural Network (CNN) fed with the power spectral density of EEG recordings belonging to different individuals. The network is trained for one million iterations to learn features related to local patterns in the spectral domain of the original signal. The performance of the system is evaluated against traditional classification-based methods that use features defined from prior knowledge. Results show that the system significantly outperforms the other examined approaches, achieving 94% accuracy at identifying an individual within a group of 23 individuals.
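As a sketch of the kind of input such a system might consume, the snippet below computes per-channel power spectral densities with Welch's method on synthetic multi-channel EEG. The channel count, sampling rate, and 1–40 Hz band are assumptions chosen for illustration, not the paper's exact configuration.

```python
import numpy as np
from scipy.signal import welch

fs = 128                                  # assumed sampling rate (Hz)
rng = np.random.default_rng(1)
eeg = rng.normal(size=(14, 4 * fs))       # 14 channels, 4 s of synthetic EEG

# Welch PSD per channel; one-second segments give 1 Hz frequency resolution.
freqs, psd = welch(eeg, fs=fs, nperseg=fs)

# Keep a 1-40 Hz band commonly used in EEG analysis and flatten the
# log-power values into one feature vector for a downstream classifier.
band = (freqs >= 1) & (freqs <= 40)
features = np.log(psd[:, band]).ravel()
```

In the paper's setup this kind of spectral representation is what the CNN would learn local patterns from; here it is simply flattened so any classifier could consume it.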
The proliferation of multimedia technology and its wide adoption by users has created the need for more effective metrics for Quality of Experience (QoE). Objective video quality metrics usually under-perform in terms of perceptual quality, thus evaluation is usually performed offline by people, an arduous and time-consuming task that is also affected by external conditions and user preferences. The use of physiological signals, recorded from users exposed to multimedia stimuli, has the potential to offer a more robust and unbiased method for evaluating perceptual quality. In this work, we propose the evaluation of the perceptual quality of video by means of cerebral (electroencephalography, EEG) and peripheral (electrocardiography, ECG, and electromyography, EMG) physiological signals. A machine learning approach is employed to map features extracted from these signals to a subjective video quality scale. Five 4K video sequences were encoded at different quality levels using the state-of-the-art HEVC codec, and their quality was evaluated by real users while their physiological signals were recorded. The quality levels predicted by the proposed model were then compared against the user-provided Mean Opinion Scores (MOS), and the results demonstrated the potential of the proposed method for accurate perceptual video quality evaluation.
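A toy version of the feature-to-quality mapping described above can be sketched as follows. The random features, the support vector regressor, and the synthetic MOS values are stand-ins for illustration, not the study's actual feature set or model.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
n_trials, n_feat = 60, 10

# Hypothetical per-trial features extracted from EEG/ECG/EMG recordings
# (e.g. band powers, heart-rate statistics); MOS on a 1-5 scale.
X = rng.normal(size=(n_trials, n_feat))
w = rng.normal(size=n_feat)
mos = np.clip(3 + 0.3 * (X @ w) + 0.2 * rng.normal(size=n_trials), 1, 5)

# Map features to the subjective quality scale with a support vector
# regressor, evaluated with cross-validation as is typical in QoE studies.
pred = cross_val_predict(SVR(), X, mos, cv=5)
rmse = float(np.sqrt(np.mean((pred - mos) ** 2)))
```

Cross-validated predictions avoid evaluating the regressor on trials it was trained on, which matters when the number of recorded trials per participant is small.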
Electroencephalography (EEG) signals provide a representation of the brain's activity patterns and have recently been exploited for user identification and authentication due to their uniqueness and their robustness to interception and artificial replication. Nevertheless, such signals are commonly affected by the individual's emotional state. In this work, we examine the use of images as stimuli for acquiring EEG signals and study whether images that evoke similar emotional responses lead to higher identification accuracy than images that evoke different emotional responses. Results show that identification accuracy increases when the system is trained on EEG recordings that refer to emotional states similar to those of the recordings used for identification, with up to a 5.3% increase in identification accuracy compared to using recordings that refer to different emotional states. Furthermore, this improvement holds independently of the features and classification algorithms employed.