2014
DOI: 10.1155/2014/678786

Driver’s Fatigue Detection Based on Yawning Extraction

Abstract: A growing number of traffic accidents are caused principally by driver fatigue. Fatigue presents a real danger on the road because it reduces the driver's capacity to react and to analyze information. In this paper we propose an efficient and nonintrusive system for monitoring driver fatigue using yawning extraction. The proposed scheme uses face extraction based on a support vector machine (SVM) and a new approach for mouth detection, based on the circular Hough transform (CHT), applied to extracted mouth regions. Our syst…
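As a rough illustration of the pipeline the abstract outlines (face extraction followed by a circular Hough transform on the mouth region), the sketch below uses OpenCV. It substitutes a Haar cascade for the paper's SVM-based face extraction, and the mouth-region crop, blur, and Hough parameters are assumptions made for the example rather than values taken from the paper.

```python
# Illustrative sketch only: follows the abstract's outline (face extraction,
# then circular Hough transform on the mouth region), but substitutes an
# OpenCV Haar cascade for the paper's SVM-based face extraction and uses
# assumed thresholds/parameters that are not taken from the paper.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def is_yawning(frame_bgr, min_mouth_radius_ratio=0.15):
    """Return True if a wide-open mouth (candidate yawn) is found in the frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Assumption: the mouth lies in the lower third of the detected face box.
        mouth_roi = gray[y + 2 * h // 3 : y + h, x : x + w]
        mouth_roi = cv2.medianBlur(mouth_roi, 5)
        # Circular Hough transform: a wide-open mouth approximates a circle.
        circles = cv2.HoughCircles(
            mouth_roi, cv2.HOUGH_GRADIENT, dp=1.2, minDist=w,
            param1=100, param2=30,
            minRadius=int(min_mouth_radius_ratio * w),
            maxRadius=w // 2)
        if circles is not None:
            return True
    return False
```

In a video-based system this check would be applied per frame, with a yawn declared only when the open-mouth state persists over several consecutive frames.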

Cited by 86 publications (33 citation statements) · References 22 publications
“…Behavioral-based vigilance estimation methods use features that include mouth states [28], [29], eye states [30]-[32], facial expressions [33], and body posture [34], collected by a video device (e.g., a camera or an infrared illuminator), to compute the detection accuracy rate. Alioua et al. [28] proposed a circular Hough transform (CHT)-based approach using a mouth state feature detected as a circular edge, which achieved a mean correct classification rate (MCCR) of 0.98 and a kappa statistic (K) of 0.97. In addition, Flores et al. [31] proposed a support vector machine (SVM)-based model with eye state features, extracted using a condensation algorithm, with an accuracy of 93%.…”
Section: B. Behavioral Methods
Mentioning confidence: 99%
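For reference, the two figures quoted for [28] above, the mean correct classification rate (MCCR) and the kappa statistic (K), can be computed from a binary yawning/not-yawning confusion matrix as in the minimal sketch below; the counts used are invented for illustration and are not the paper's data.

```python
# Illustrative only: computes MCCR (mean per-class correct classification rate)
# and Cohen's kappa from a confusion matrix. The counts below are invented for
# the example and are not taken from [28].
import numpy as np

def mccr_and_kappa(confusion):
    """confusion[i, j] = count of samples with true class i predicted as class j."""
    c = np.asarray(confusion, dtype=float)
    total = c.sum()
    # Mean correct classification rate: average of per-class recall values.
    per_class_rate = np.diag(c) / c.sum(axis=1)
    mccr = per_class_rate.mean()
    # Cohen's kappa: observed agreement corrected for chance agreement.
    p_observed = np.trace(c) / total
    p_expected = (c.sum(axis=1) * c.sum(axis=0)).sum() / total**2
    kappa = (p_observed - p_expected) / (1.0 - p_expected)
    return mccr, kappa

# Hypothetical counts: rows = true {yawn, no-yawn}, columns = predicted.
print(mccr_and_kappa([[49, 1],
                      [2, 48]]))   # -> (0.97, 0.94)
```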
“…To estimate the level of vigilance, typically, methods can be divided into four categories [6]: physiological methods [7]-[27], behavioral methods [28]-[34], subjective methods [35]-[38], and vehicle-based methods [39]-[41].…”
Section: Introduction
Mentioning confidence: 99%
“…Emotion-sensitive facial muscles and regions (e.g., supraorbital, cheek, and perinasal areas) have also been extracted and studied [31][32][33][34][35][36][37]. Subtle changes that occur in the face, such as head motion (shaking), head pose, yawning, eye blink rate, and eye closure duration, have been utilized to detect emotions and fatigue [38][39][40][41][42][43][44]. Deep learning algorithms have also been widely applied in emotion recognition [45], [46].…”
Section: Literature Review
Mentioning confidence: 99%
“…Significant research has been carried out in the context of traffic safety systems. Video image processing techniques are used to identify distinctive facial expressions, such as eye movements, drooping eyelids, and yawning, in order to detect drowsy driving [2][3][4][5]. Some systems use on-board sensors, such as speed and orientation sensors, GPS, and a two-axis accelerometer, to extract information about the vehicle state and detect unsafe driving styles, providing feedback with recommended actions [6][7][8].…”
Section: Related Work
Mentioning confidence: 99%