2020
DOI: 10.1016/j.imu.2020.100372

Development of a Real-Time Emotion Recognition System Using Facial Expressions and EEG based on machine learning and deep neural network methods

Cited by 196 publications (69 citation statements). References 10 publications.
“…Understanding emotional signals in everyday life environments becomes an important aspect that influences people's communication through verbal and nonverbal behavior [ 40 ]. One such example of emotional signals is expressed through facial expression which is known to be one of the most immediate means of human beings to communicate their emotions and intentions [ 41 ]. With the advancement of technologies in brain-computer interface and neuroimaging, it is now feasible to capture the brainwave signals nonintrusively and to measure or control the motions of devices virtually [ 42 ] or physically such as wheelchairs [ 43 ], mobile phone interfacing [ 44 ], or prosthetic arms [ 45 , 46 ] with the use of a wearable EEG headset.…”
Section: Emotions
Citation type: mentioning (confidence: 99%)
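The brainwave signals mentioned above are typically summarized as power in canonical frequency bands (delta, theta, alpha, beta) before any classification. As a minimal sketch, not the method of the cited works, band power can be estimated from a raw periodogram; the function name and band edges here are illustrative assumptions:

```python
import numpy as np

def band_powers(signal, fs, bands=None):
    """Average power in canonical EEG frequency bands via a raw periodogram."""
    if bands is None:
        # Commonly cited band edges in Hz; exact boundaries vary by study.
        bands = {"delta": (1, 4), "theta": (4, 8),
                 "alpha": (8, 13), "beta": (13, 30)}
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    # Mean power of the spectral bins falling inside each band.
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in bands.items()}
```

A 10 Hz sine sampled at 256 Hz should show its energy concentrated in the alpha band.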
“…Proposed method for speech signal recognition system based on EMD and non-linear features: (a) simplified representation; (b) detailed illustration of the different processing steps involved in the proposed SER system. […] frequency scales [29]. Hence, predefined selection of any IMF component or frequency scale will lead to a loss of information.…”
Citation type: mentioning (confidence: 99%)
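The excerpt refers to EMD decomposing a signal into intrinsic mode functions (IMFs) at different frequency scales. A simplified sifting loop, illustrative only (real EMD implementations add boundary handling and stricter stopping criteria), can be sketched as:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift(x, max_iter=50):
    """One sifting pass: extract a candidate IMF from x."""
    h = x.copy()
    t = np.arange(len(x))
    for _ in range(max_iter):
        # Indices of interior local maxima and minima.
        maxima = np.where((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:]))[0] + 1
        minima = np.where((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:]))[0] + 1
        if len(maxima) < 2 or len(minima) < 2:
            return h, True  # too few extrema: residue reached
        # Cubic-spline envelopes through the extrema (endpoints pinned).
        upper = CubicSpline(np.r_[0, maxima, len(x) - 1],
                            np.r_[h[0], h[maxima], h[-1]])(t)
        lower = CubicSpline(np.r_[0, minima, len(x) - 1],
                            np.r_[h[0], h[minima], h[-1]])(t)
        mean = (upper + lower) / 2
        h = h - mean
        if np.mean(mean ** 2) < 1e-8 * np.mean(h ** 2):
            break  # envelope mean is negligible: h is an IMF
    return h, False

def emd(x, max_imfs=6):
    """Decompose x into IMFs plus a residue."""
    imfs, residue = [], x.astype(float)
    for _ in range(max_imfs):
        imf, done = sift(residue)
        if done:
            break
        imfs.append(imf)
        residue = residue - imf
    return imfs, residue
```

Because the residue is defined as what remains after subtracting each IMF, summing the IMFs and the residue reconstructs the original signal, which is why discarding a predefined IMF or frequency scale loses information.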
“…No standard number of AUs has been proposed yet. The total number of AUs and their locations depends on the application and its requirements [28, 29]. The distance between the AUs is commonly used as a standard feature for facial expression recognition.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
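Using inter-AU distances as features, as the excerpt describes, amounts to computing pairwise Euclidean distances between detected landmark points. A minimal sketch (the landmark coordinates are assumed to come from any face-landmark detector; the function name is an illustrative assumption):

```python
import numpy as np
from itertools import combinations

def pairwise_distances(landmarks):
    """Euclidean distances between all pairs of facial landmark points."""
    pts = np.asarray(landmarks, dtype=float)
    return np.array([np.linalg.norm(pts[i] - pts[j])
                     for i, j in combinations(range(len(pts)), 2)])
```

In practice these distances are usually normalized, for example by the inter-ocular distance, so that the feature vector is invariant to face scale.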
“…A novel vectorized emotion recognition model is proposed to identify three primary emotions (angry, happy, and neutral) using 70 facial vectors and a deep neural network (DNN), achieving a mean accuracy of 84.33% [39]. In recent literature, researchers have used spatial and temporal information from input video sequences to classify facial expressions using CNNs, ensemble multi-level CNNs, and Long Short-Term Memory (LSTM) networks [29, 40–43]. Common issues reported in earlier works include a lack of samples or data sets, low accuracy in classifying facial expressions, high computational complexity (more memory and power required for processing the data), unsuitability for real-time applications, and a lack of user-friendliness (restrictions on using the system for a variety of applications) [44].…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
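The vectorized model described above maps a 70-dimensional facial-vector input to three emotion classes with a DNN. A minimal forward-pass sketch, where everything except the 70-dimensional input and 3-class output (hidden size, initialization, activation) is an assumption rather than the cited architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class TinyMLP:
    """70 -> 32 -> 3 multilayer perceptron (forward pass only)."""
    def __init__(self, d_in=70, d_hidden=32, d_out=3):
        self.W1 = rng.normal(0, 0.1, (d_in, d_hidden))
        self.b1 = np.zeros(d_hidden)
        self.W2 = rng.normal(0, 0.1, (d_hidden, d_out))
        self.b2 = np.zeros(d_out)

    def predict_proba(self, x):
        h = np.maximum(0, x @ self.W1 + self.b1)  # ReLU hidden layer
        return softmax(h @ self.W2 + self.b2)     # class probabilities
```

The three output probabilities would correspond to the angry, happy, and neutral classes after training; training itself (loss, optimizer) is omitted here.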