Emotion recognition based on electroencephalography (EEG) has received attention as a way to implement human-centric services. However, there is still much room for improvement, particularly in terms of recognition accuracy. In this paper, we propose a novel deep learning approach using convolutional neural networks (CNNs) for EEG-based emotion recognition. In particular, we employ brain connectivity features, which can account for synchronous activation of different brain regions and have not been used with deep learning models in previous studies. In addition, we develop a method to effectively capture the asymmetric brain activity patterns that are important for emotion recognition. Experimental results confirm the effectiveness of our approach.
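As a rough, hypothetical illustration of this kind of pipeline (not the authors' actual model), the Python sketch below builds a Pearson-correlation connectivity matrix from one multichannel EEG epoch and feeds it to a small CNN. All shapes, layer sizes, and names such as EmotionCNN are illustrative assumptions.

```python
# Minimal sketch: connectivity matrix as CNN input (illustrative only).
import numpy as np
import torch
import torch.nn as nn

def connectivity_matrix(eeg):
    """eeg: (n_channels, n_samples) -> (n_channels, n_channels).
    Pearson correlation between channel pairs, a simple proxy for
    synchronous activation of different brain regions."""
    return np.corrcoef(eeg)

class EmotionCNN(nn.Module):
    """Tiny CNN that treats the connectivity matrix as a 1-channel image."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):  # x: (batch, 1, n_channels, n_channels)
        return self.classifier(self.features(x).flatten(1))

# Usage: one 32-channel, 512-sample epoch -> class logits.
eeg = np.random.randn(32, 512)
conn = torch.tensor(connectivity_matrix(eeg), dtype=torch.float32)
logits = EmotionCNN()(conn.unsqueeze(0).unsqueeze(0))
```

Treating the channel-by-channel connectivity matrix as an image is what allows a standard 2-D CNN to exploit synchronized activity between regions, rather than only per-channel waveforms.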
This paper proposes a novel graph signal-based deep learning method for electroencephalography (EEG) and its application to EEG-based video identification. We present new methods to effectively represent EEG data as signals on graphs and to learn them using graph convolutional neural networks. Experimental results for video identification using EEG responses obtained while watching videos show the effectiveness of the proposed approach in comparison to existing methods. Effective schemes for graph signal representation of EEG are also discussed.
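To make the graph-signal idea concrete, here is a minimal NumPy sketch of one Kipf-Welling-style graph convolution, X' = ReLU(A_hat X W), where A_hat is the symmetrically normalized adjacency of an electrode graph. The toy 4-electrode graph and feature sizes below are assumptions for illustration, not the paper's actual construction.

```python
# Minimal sketch of one graph-convolution step on an electrode graph.
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops:
    A_hat = D^{-1/2} (A + I) D^{-1/2}."""
    A = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    return d_inv_sqrt @ A @ d_inv_sqrt

def graph_conv(A_hat, X, W):
    """One GCN layer: aggregate neighbor features, then transform (ReLU)."""
    return np.maximum(A_hat @ X @ W, 0.0)

# Toy electrode graph: 4 electrodes, edges between spatial neighbors.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
X = np.random.randn(4, 8)    # per-electrode features (e.g., band powers)
W = np.random.randn(8, 16)   # learnable weights
H = graph_conv(normalized_adjacency(A), X, W)  # (4, 16) node embeddings
```

The key design point is that the graph encodes relationships between electrodes (e.g., spatial adjacency on the scalp), so convolution mixes features only among connected electrodes instead of treating channels as an unordered set.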
High dynamic range (HDR) imaging has been attracting much attention as a technology that can provide an immersive experience. Its ultimate goal is to provide better quality of experience (QoE) via enhanced contrast. In this paper, we analyze the perceptual experience of tone-mapped HDR videos both explicitly, by conducting a subjective questionnaire assessment, and implicitly, by using EEG and peripheral physiological signals. The subjective assessment reveals that tone-mapped HDR videos are perceived as more interesting, more natural, and of better quality than low dynamic range (LDR) videos. Physiological signals are recorded while watching tone-mapped HDR and LDR videos, and classification systems are constructed to explore the perceptual differences captured by the physiological signals. A significant difference in the physiological signals between tone-mapped HDR and LDR videos is observed in the classification under both subject-dependent and subject-independent scenarios. Also, a significant difference in the signals between high versus low perceived contrast and overall quality is detected via classification under the subject-dependent scenario. Moreover, it is shown that features extracted from the gamma frequency band are effective for classification.
Index Terms: High dynamic range video, electroencephalography (EEG), physiological signal, quality of experience (QoE).
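As a hedged sketch of how gamma-band features might feed such a classifier (the abstract does not specify the actual feature extraction or classifier), the code below band-passes each channel to roughly 30-45 Hz, takes log band power per channel, and fits an SVM to separate tone-mapped HDR from LDR trials. The sampling rate, band edges, data shapes, and SVC choice are all illustrative assumptions.

```python
# Minimal sketch: gamma-band log power per channel + SVM (illustrative).
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC

def gamma_band_power(eeg, fs=256.0, lo=30.0, hi=45.0):
    """eeg: (n_channels, n_samples) -> (n_channels,) log band power."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="bandpass")
    filtered = filtfilt(b, a, eeg, axis=1)
    return np.log(np.mean(filtered ** 2, axis=1))

# Toy data: 20 trials of 32-channel EEG; labels 0 = LDR, 1 = HDR.
rng = np.random.default_rng(0)
trials = rng.standard_normal((20, 32, 512))
labels = rng.integers(0, 2, size=20)

features = np.stack([gamma_band_power(t) for t in trials])
clf = SVC(kernel="rbf").fit(features, labels)
pred = clf.predict(features)  # in practice, evaluate on held-out trials
```

In a subject-dependent scenario, training and test trials would come from the same subject; in a subject-independent scenario, the classifier would be evaluated on subjects unseen during training.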
The exponential growth in the popularity of multimedia has created a need for user-centric adaptive applications that manage multimedia content more effectively. Implicit analysis, which examines users' perceptual experience of multimedia by monitoring physiological or behavioral cues, has the potential to satisfy such demands. In particular, physiological signals, categorized into cerebral physiological signals (electroencephalography, functional magnetic resonance imaging, and functional near-infrared spectroscopy) and peripheral physiological signals (heart rate, respiration, skin temperature, etc.), have recently received attention along with the notable development of wearable physiological sensors. In this paper, we review existing studies on physiological signal analysis exploring perceptual experience of multimedia. Furthermore, we discuss current trends and challenges.
Index Terms: physiological signal, perceptual experience, implicit analysis, multimedia
Overviews of implicit measurement techniques using behavioral cues can be found in [6], [7].