2015
DOI: 10.1016/j.procs.2015.04.138

An Approach to EEG Based Emotion Recognition and Classification Using Kernel Density Estimation

Cited by 34 publications (10 citation statements)
References 7 publications
“…In machine learning, researchers believe that suitable feature extraction is the key to building an efficient predictive model. There are several types of features based on the frequency domain, time domain and time-frequency domain for EEG signal diagnosis and analysis [23]. Feature Classification: machine learning is a subfield of artificial intelligence.…”
Section: Feature Extraction and Classification
confidence: 99%
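The statement above distinguishes frequency-domain, time-domain and time-frequency features for EEG analysis. As an illustration only, and not the indexed paper's actual pipeline, the Python sketch below computes standard band-power features with Welch's method on a synthetic single-channel segment; the sampling rate, band limits and signal are assumptions chosen for this example.

```python
# Illustrative frequency-domain EEG feature extraction (band power via Welch's method).
# The sampling rate, band limits and synthetic signal are assumptions for this sketch,
# not values taken from the cited paper.
import numpy as np
from scipy.signal import welch

FS = 128  # assumed sampling rate in Hz
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_power_features(signal, fs=FS):
    """Return one band-power feature per EEG band for a 1-D signal segment."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    feats = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        feats[name] = np.trapz(psd[mask], freqs[mask])  # integrate PSD over the band
    return feats

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(0, 4, 1 / FS)                      # 4-second synthetic segment
    segment = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
    print(band_power_features(segment))              # alpha power should dominate
```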
“…Using these technologies, diagnosis of neuropsychiatric and neurological disorders such as Alzheimer's disease, epilepsy and depressive disorder can be performed [31]. Nowadays, researchers have shown extensive interest in enabling human emotion interaction with machines through a proper brain-computer interface [23]. The literature survey covers recent methods used in physiological-signal-based emotion recognition; it will help researchers working in this field. The remaining portion of this paper is organized as follows: emotion is described in Section II.…”
confidence: 99%
“…Many physiological modalities and features have been evaluated for ER, namely Electroencephalography (EEG) [28,29,30], Electrocardiography (ECG) [31,32,33], Electrodermal Activity (EDA) [34,35,36], Respiration (RESP) [26], Blood Volume Pulse (BVP) [26,35] and Temperature (TEMP) [26]. Multi-modal approaches have prevailed; however, there is still no clear evidence of which feature combinations and physiological signals are the most relevant.…”
Section: State Of the Art
confidence: 99%
“…Several methods and techniques can be applied to perform emotion recognition through the use of a couple of hardware devices and software, such as:

- analysis of emotional properties based on two kinds of physiological data, ECG and EEG [3];
- a unified system for efficient discrimination of positive and negative emotions based on EEG data [4];
- an automatic recognizer of facial expression around the eyes and forehead based on Electrooculography (EOG) data, giving support to the emotion recognition task [5];
- use of GSR and ECG data in a study examining the effectiveness of the Matching Pursuit (MP) algorithm in emotion recognition, using mainly PCA to reduce the feature dimensionality and a Probabilistic Neural Network (PNN) as the recognition technique [6];
- an emotion recognition system based on physiological data, using ECG and respiration (RSP) data recorded simultaneously by a physiological monitoring device based on wearable sensors [7];
- emotion recognition using EEG data, including an analysis of the impact of positive and negative emotions, with SVM and RBF as the recognition methods [8];
- a new approach to emotion recognition based on EEG, with a classification method using Artificial Neural Networks (ANN) and feature analysis based on Kernel Density Estimation (KDE) [9];
- an application that stores several psychophysiological data streams (HR, ECG, SpO2 and GSR) acquired while users watched advertisements about smoking campaigns [10];
- flight-simulator experiments used to develop a multimodal sensing architecture that recognizes emotions using three different biosignal acquisition techniques [11];
- a multimodal sensing system to identify emotions using different acquisition techniques, based on a photo presentation methodology [12];
- a real-time user interface with emotion recognition, motivated by the need for skill development to support a change toward a more human-centered interface paradigm [13];
- recognition of emotions through psychophysiological sensing using a multiple-fusion-layer based ensemble classifier of stacked autoencoders (MESAE) [14].…”
Section: Introduction
confidence: 99%
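Reference [9] in the excerpt above is the indexed paper, described as combining Kernel Density Estimation based feature analysis with an Artificial Neural Network classifier. The paper's exact feature construction is not reproduced here, so the sketch below is only one plausible reading: it assumes a per-segment Gaussian KDE evaluated on a fixed amplitude grid as the feature vector and uses scikit-learn's MLPClassifier as the ANN; the grid, segment length, network size and synthetic data are illustrative assumptions.

```python
# Sketch of KDE-derived features feeding an ANN classifier, in the spirit of the
# indexed paper's description. The amplitude grid, segment length, labels and
# network configuration are assumptions made for this example only.
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.neural_network import MLPClassifier

GRID = np.linspace(-3, 3, 32)  # assumed fixed evaluation grid (normalized amplitude)

def kde_features(segment):
    """Evaluate a Gaussian KDE of the segment's samples on a fixed grid."""
    z = (segment - segment.mean()) / (segment.std() + 1e-12)  # normalize amplitude
    return gaussian_kde(z)(GRID)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic stand-in for labelled EEG segments (two "emotion" classes).
    segments = [rng.normal(0, 1.0 + 0.5 * (i % 2), 256) for i in range(200)]
    labels = np.array([i % 2 for i in range(200)])

    X = np.vstack([kde_features(s) for s in segments])
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
    clf.fit(X[:150], labels[:150])                     # train the ANN on KDE features
    print("held-out accuracy:", clf.score(X[150:], labels[150:]))
```

In a real pipeline these features would be computed per channel and per trial from recorded EEG rather than from synthetic segments; the point of the sketch is only how a density estimate can be turned into a fixed-length feature vector for a neural-network classifier.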