2022
DOI: 10.3390/s23010311
Unusual Driver Behavior Detection in Videos Using Deep Learning Models

Abstract: Anomalous driving behavior detection is becoming more popular since it is vital to ensuring the safety of drivers and passengers in vehicles. Road accidents happen for various reasons, including health problems, mental stress, and fatigue. It is critical to monitor abnormal driving behaviors in real time to improve driving safety, raise drivers' awareness of their driving patterns, and minimize future road accidents. Many symptoms can indicate this condition in the driver, such as facial expressions or abnormal actions…

Cited by 13 publications (3 citation statements) · References 27 publications
“…Later, Yan et al. [64] used the SEU dataset to classify six driver behaviors (responding to a phone call, eating while driving, operating the gear shift, correct driving position with hands on the wheel, playing with a phone while driving, and smoking while driving), using a Gaussian Mixture Model to extract skin-like regions and a CNN to generate action labels on videos, achieving a mean average precision (mAP) of 97.97%. Abosaq et al. [65] proposed a customized CNN model (Figure 5) to recognize normal and abnormal driver actions (smoking, eating, drinking, calling, and normal driving) from driver videos, and achieved 95% accuracy on the prepared testing dataset. Yang et al. [66] investigated the impact of feature selection on driver cognitive distraction detection and validation in real-world non-automated and Level 2 automated driving scenarios.…”
Section: Driver State Monitoring
confidence: 99%
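The GMM-then-CNN pipeline attributed to Yan et al. above can be sketched as follows. This is a minimal illustration, not their implementation: it fits a two-component Gaussian Mixture to pixel colors of a synthetic frame and keeps the component whose mean is closest to an assumed skin tone; the resulting mask would then be cropped and passed to a CNN action classifier. The frame data and the `skin_ref` color are assumptions for demonstration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
h, w = 32, 32
# Synthetic frame: dark background plus a brighter skin-toned patch (assumed data).
frame = rng.normal(loc=60, scale=10, size=(h, w, 3))
frame[8:24, 8:24] = rng.normal(loc=[200, 150, 120], scale=10, size=(16, 16, 3))
pixels = frame.reshape(-1, 3)

# Fit a 2-component GMM over pixel colors.
gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)
labels = gmm.predict(pixels)

# Keep the component whose mean is closest to a nominal skin color (assumption).
skin_ref = np.array([200.0, 150.0, 120.0])
skin_comp = np.argmin(np.linalg.norm(gmm.means_ - skin_ref, axis=1))
mask = (labels == skin_comp).reshape(h, w)

# In the full pipeline, the masked region would be fed to a CNN for action labels.
print(mask[16, 16], mask[0, 0])  # skin-patch center vs. background corner
```

In practice the skin component would be selected with a learned or calibrated skin-color model rather than a hard-coded reference color.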
“…The framework demonstrated an accuracy of 88% on undistorted videos and 86% on distorted videos. The authors of [4] presented a deep learning system to identify abnormal behavior in drivers, using a new dataset that includes categories such as smoking, eating, drinking, calling, and normal behavior. The study employs pre-trained and fine-tuned CNN models, such as ResNet101, VGG-16, VGG-19, and a proposed CNN model, to analyze the results.…”
Section: Literature Review
confidence: 99%
“…Computer vision and machine learning methods are utilized to detect and identify human activities. The use of diverse video characteristics enables an understanding of human behavior and actions, which, in turn, aids in categorizing activities as either normal or abnormal [4]. However, the recognition of human activity poses challenges due to the complexity of human behavior, making it difficult to differentiate between normal and abnormal activities [5], [6].…”
Section: Introduction
confidence: 99%
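The normal-versus-abnormal categorization described in this statement reduces, at its simplest, to a binary classifier over video-derived features. The sketch below is purely illustrative and not from any cited work: two synthetic motion statistics (assumed feature names) are separated with a logistic-regression classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Assumed features: mean motion energy and head-pose variance (synthetic data).
normal = rng.normal(loc=[0.2, 0.1], scale=0.05, size=(100, 2))
abnormal = rng.normal(loc=[0.8, 0.6], scale=0.05, size=(100, 2))
X = np.vstack([normal, abnormal])
y = np.array([0] * 100 + [1] * 100)  # 0 = normal activity, 1 = abnormal

clf = LogisticRegression().fit(X, y)
pred = clf.predict([[0.15, 0.10], [0.85, 0.65]])
print(pred)
```

Real systems replace these hand-crafted features with CNN or spatiotemporal embeddings, precisely because, as noted above, the boundary between normal and abnormal behavior is hard to draw from simple statistics.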