2022
DOI: 10.1109/tvt.2022.3195230
Real-Time Driver Behavior Detection Based on Deep Deformable Inverted Residual Network With an Attention Mechanism for Human-Vehicle Co-Driving System

Cited by 16 publications (2 citation statements)
References 29 publications
“…Considering that distraction behavior recognition is a fine-grained image classification task, to improve the model's ability to extract subtle features from images with small differences, Li et al. [16] guided the model to learn robust features using a contrastive learning loss and stop-gradient strategies. In addition, improving the classification performance of deep learning models for distraction behaviors by applying attention mechanisms and prior knowledge is also an important research direction [17][18][19][20]. Lu et al. [18] applied channel attention to convolutional weights and fused global and keypoint features from driving images at different scales.…”
Section: B. Deep Learning Feature Extraction and Classification
confidence: 99%
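As a rough illustration of the channel-attention idea mentioned in the excerpt above (not the exact module of Lu et al. [18] or of the indexed paper), a minimal squeeze-and-excitation style sketch in PyTorch could look as follows; the class name, reduction ratio, and tensor sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (illustrative sketch)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: B x C x H x W -> B x C x 1 x 1
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # excitation: per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights  # reweight each channel of the feature map


# Toy usage: attend over a 64-channel feature map from a driver image.
features = torch.randn(2, 64, 56, 56)
print(ChannelAttention(64)(features).shape)  # torch.Size([2, 64, 56, 56])
```

The learned per-channel weights rescale the convolutional feature map, which is the basic mechanism behind applying channel attention to convolutional features before fusing multi-scale information.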
“…The process of extracting the category token can be represented as $t_{\mathrm{class}} = \mathrm{CT}(\mathrm{TE}(\mathrm{tokens}))$ (17), where CT represents the separation of the category token from the tokens and TE represents the Transformer encoder.…”
Section: 1
confidence: 99%
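As a rough sketch of the category-token extraction summarized by Eq. (17), the snippet below pushes a token sequence through a generic PyTorch Transformer encoder (standing in for TE) and slices out the class token (standing in for CT); the embedding size, layer count, and the assumption that the category token occupies index 0 are illustrative, not taken from the cited work.

```python
import torch
import torch.nn as nn

# Illustrative sizes, not values from the cited paper.
embed_dim, num_patches = 192, 196

# TE: a generic Transformer encoder.
encoder_layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True)
transformer_encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)

# Patch tokens with a category (class) token assumed to be prepended at index 0.
tokens = torch.randn(1, num_patches + 1, embed_dim)

encoded = transformer_encoder(tokens)  # TE(tokens)
category_token = encoded[:, 0]         # CT: separate the category token from the tokens
print(category_token.shape)            # torch.Size([1, 192])
```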