2018
DOI: 10.1109/tcsvt.2017.2769096

Driver Facial Landmark Detection in Real Driving Situations

Cited by 25 publications (12 citation statements)
References 44 publications
“…FL models are trained on the appearance and shape variations starting from a coarse initialization. Then, the initial shape is moved step by step to a better position until convergence [16]. Basic emotions (BEs) are the seven basic human emotions: happiness, surprise, anger, sadness, fear, disgust, and neutral, as shown in Figure 3a.…”
Section: Introduction
Mentioning confidence: 99%
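The iterative fitting loop referenced in this statement can be sketched as follows. This is a minimal illustration of cascaded-regression-style landmark refinement, not the cited paper's implementation; `extract_features` and `regressors` are hypothetical placeholders for the trained model components.

```python
# A minimal sketch of iterative facial-landmark fitting: start from a coarse
# mean shape, then repeatedly apply a learned regressor that maps local
# appearance features to a shape update until the shape stops moving.
# NOTE: `extract_features` and `regressors` are hypothetical placeholders.
import numpy as np

def fit_landmarks(image, mean_shape, regressors, extract_features,
                  max_iters=10, tol=1e-3):
    """Refine landmark positions step by step from a coarse initialization."""
    shape = mean_shape.copy()                 # (n_landmarks, 2) initial shape
    for t in range(max_iters):
        feats = extract_features(image, shape)           # appearance around landmarks
        delta = np.asarray(regressors[t % len(regressors)](feats))  # shape update
        shape = shape + delta.reshape(shape.shape)
        if np.linalg.norm(delta) < tol:       # converged: shape barely moves
            break
    return shape
```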
“…FL models are trained on the appearance and shape variations starting from a coarse initialization. Then, the initial shape is moved step by step to a better position until convergence [16].…”
Section: Introduction
Mentioning confidence: 99%
“…The geometric features generally capture the relationships between facial components. Facial landmark points are representative examples of geometric features [2,30,31]. Appearance features are extracted from the global facial region or from different types of information on facial sub-regions [20,36].…”
Section: Facial Emotion Recognition
Mentioning confidence: 99%
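As an illustration of geometric features derived from landmark points, the sketch below computes normalized pairwise landmark distances. The eye-corner indices (68-point convention) and the inter-ocular normalization are assumptions for the example, not details taken from the cited works.

```python
# A minimal sketch of geometric-feature extraction from facial landmark points:
# pairwise distances between landmarks, normalized by the inter-ocular distance
# so the features are scale-invariant. Eye-corner indices assume the common
# 68-point landmark layout (an illustrative assumption).
import numpy as np
from itertools import combinations

def geometric_features(landmarks, left_eye_idx=36, right_eye_idx=45):
    """landmarks: (n, 2) array of (x, y) landmark coordinates."""
    landmarks = np.asarray(landmarks, dtype=float)
    iod = np.linalg.norm(landmarks[left_eye_idx] - landmarks[right_eye_idx])
    dists = [np.linalg.norm(landmarks[i] - landmarks[j]) / iod
             for i, j in combinations(range(len(landmarks)), 2)]
    return np.asarray(dists)
```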
“…The most important issue in a speech-emotion recognition system is to combine the extraction of suitable speech-signal features with an appropriate classification engine. These features include pitch, formant, and energy features [23,33,41]. In addition, mel-frequency cepstral coefficients (MFCCs) are used as a representative feature in many speech-emotion recognition studies [26,37,39].…”
Section: Introduction
Mentioning confidence: 99%
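The features named in this statement can be extracted, for example, with the librosa library. The sketch below is an assumed pipeline (file name, sample rate, and pitch range are illustrative), not the setup of any cited paper.

```python
# A minimal sketch of the speech features mentioned above (energy, pitch, MFCCs),
# assuming the librosa library; "utterance.wav" and the settings are illustrative.
import numpy as np
import librosa

y, sr = librosa.load("utterance.wav", sr=16000)   # hypothetical input file

# Frame-level energy (RMS) and pitch contour.
energy = librosa.feature.rms(y=y)[0]
f0 = librosa.yin(y, fmin=50, fmax=400, sr=sr)

# 13 mel-frequency cepstral coefficients per frame.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

# A simple utterance-level descriptor: mean and std of each feature stream.
features = np.concatenate([
    [energy.mean(), energy.std(), f0.mean(), f0.std()],
    mfcc.mean(axis=1), mfcc.std(axis=1),
])
print(features.shape)   # (30,)
```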
“…However, if a decision tree is not removed, a weight is assigned to it according to its accuracy, estimated on the out-of-bag (OOB) samples. Because the OOB data are not involved in building the tree, learning the weights from this dataset helps avoid over-fitting [45]. Repeating the above procedure generates T decision trees, and the OOB accuracy values of the remaining trees are used as the weights w_t.…”
Section: Facial Expression Recognition Approach
Mentioning confidence: 99%
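A minimal sketch of the OOB-weighted voting described here, assuming bootstrap-trained scikit-learn decision trees and a synthetic dataset; it illustrates the idea, not the cited implementation, and the tree count and dataset parameters are arbitrary.

```python
# A minimal sketch (not the cited implementation) of OOB-accuracy-weighted
# voting over bootstrap-trained decision trees. Dataset and settings are
# illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=20, n_classes=3,
                           n_informative=10, random_state=0)

T = 25                      # number of trees (illustrative)
n = len(X)
trees, weights = [], []

for t in range(T):
    # Bootstrap sample; the samples left out form the OOB set for this tree.
    boot = rng.integers(0, n, size=n)
    oob = np.setdiff1d(np.arange(n), boot)

    tree = DecisionTreeClassifier(random_state=t).fit(X[boot], y[boot])

    # Per-tree weight w_t = accuracy on its own OOB samples.
    w_t = tree.score(X[oob], y[oob]) if len(oob) else 0.0
    trees.append(tree)
    weights.append(w_t)

weights = np.asarray(weights)

def weighted_vote(X_new):
    """Aggregate per-tree class probabilities, each scaled by its OOB weight."""
    votes = sum(w * tree.predict_proba(X_new) for w, tree in zip(weights, trees))
    return np.argmax(votes, axis=1)

print("weighted-vote accuracy:", np.mean(weighted_vote(X) == y))
```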