2021
DOI: 10.3390/s21062166

DRER: Deep Learning–Based Driver’s Real Emotion Recognizer

Abstract: In intelligent vehicles, it is essential to monitor the driver's condition; however, recognizing the driver's emotional state is one of the most challenging and important tasks. Most previous studies focused on facial expression recognition to monitor the driver's emotional state. However, while driving, many factors prevent drivers from revealing their emotions on their faces. To address this problem, we propose a deep learning-based driver's real emotion recognizer (DRER), which is a deep learning-b…

Cited by 38 publications (20 citation statements)
References 61 publications

“…Although these methods perform well with large-scale data in the wild, they encounter limitations when used in real time. Except in cases such as posterior analysis with recorded video, most real-world applications require future decisions only with past data in real time, such as facial expression recognition for interview assistant systems [21] or driver monitoring systems [19].…”
Section: Introduction
confidence: 99%
“…CAPNet consists of a modular architecture divided into a feature extractor and a causality extractor, which allows one to learn causality well from past facial images. The feature extractor is based on the facial expression recognition (FER) model proposed by Oh et al [19], which was pretrained using the AffectNet dataset [17]. We fine-tuned this FER model with a pair of single images and corresponding labels of the Aff-Wild2 dataset [10] and used the CNN architecture of the fine-tuned FER (FER-Tuned) model as the feature extractor of CAPNet.…”
Section: Introduction
confidence: 99%
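The CAPNet statement above describes reusing a fine-tuned facial expression recognition (FER) CNN as the feature extractor in front of a temporal "causality extractor" that predicts from past frames only. Below is a minimal PyTorch sketch of that pattern; the ResNet-18 backbone, GRU causality module, hidden sizes, and sequence length are illustrative assumptions, not the published CAPNet configuration.

```python
# Sketch: frozen FER-style CNN feature extractor + temporal module over past frames.
# Backbone, GRU, and dimensions are assumptions, not the actual CAPNet design.
import torch
import torch.nn as nn
from torchvision import models


class AffectPredictor(nn.Module):
    def __init__(self, num_outputs: int = 2, hidden: int = 256):
        super().__init__()
        # Stand-in for the fine-tuned FER (FER-Tuned) CNN: ResNet-18 with its
        # classifier head removed; in practice, load fine-tuned weights here.
        backbone = models.resnet18(weights=None)
        self.feature_extractor = nn.Sequential(*list(backbone.children())[:-1])
        for p in self.feature_extractor.parameters():
            p.requires_grad = False  # keep the feature extractor frozen
        # Stand-in for the causality extractor: a GRU over per-frame features.
        self.temporal = nn.GRU(input_size=512, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_outputs)  # e.g. valence/arousal

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, 3, H, W) of past facial images only
        b, t, c, h, w = frames.shape
        feats = self.feature_extractor(frames.view(b * t, c, h, w))
        feats = feats.view(b, t, -1)       # (batch, time, 512)
        _, last = self.temporal(feats)     # final hidden state summarizes the past
        return self.head(last.squeeze(0))


# Usage: eight past frames per sample, 112x112 RGB crops.
model = AffectPredictor()
out = model(torch.randn(2, 8, 3, 112, 112))
print(out.shape)  # torch.Size([2, 2])
```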
See 1 more Smart Citation
“…Driver's expressions and eye movements are recorded by near-infrared (NIR) camera sensors, and then aggressive driving behaviour is classified by a CNN. The method proposed in [21] combines both physiological (electrodermal activity) and behavioural (facial) measurements, and fuses together different data types in order to build a Sensor Fusion Emotion Recognition (SFER) system, improving the classification performance.…”
Section: Introduction
confidence: 99%
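The statement above summarizes a sensor-fusion approach that combines physiological (electrodermal activity) and behavioural (facial) measurements before classification. The PyTorch sketch below shows one generic way to do this (feature-level concatenation of two modality encoders); the feature dimensions, encoders, and the late-fusion choice are assumptions for illustration, not the SFER design in [21].

```python
# Sketch: feature-level fusion of facial and EDA features before a shared classifier.
# Dimensions and encoders are illustrative assumptions.
import torch
import torch.nn as nn


class FusionEmotionClassifier(nn.Module):
    def __init__(self, face_dim: int = 512, eda_dim: int = 16, num_classes: int = 4):
        super().__init__()
        self.face_enc = nn.Sequential(nn.Linear(face_dim, 128), nn.ReLU())
        self.eda_enc = nn.Sequential(nn.Linear(eda_dim, 32), nn.ReLU())
        self.classifier = nn.Linear(128 + 32, num_classes)

    def forward(self, face_feat: torch.Tensor, eda_feat: torch.Tensor) -> torch.Tensor:
        # Concatenate the two modality embeddings and classify the fused vector.
        fused = torch.cat([self.face_enc(face_feat), self.eda_enc(eda_feat)], dim=-1)
        return self.classifier(fused)


model = FusionEmotionClassifier()
logits = model(torch.randn(4, 512), torch.randn(4, 16))  # batch of 4 samples
print(logits.shape)  # torch.Size([4, 4])
```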
“…In the second class, neuro-fuzzy controllers and machine learning approaches have been developed [13][14][15][16]. For example, the improvement of the backstepping control method through the use of FLSs is investigated in [17], and the time convergence is analyzed.…”
Section: Introduction (1. Literature Review)
confidence: 99%