2018 IEEE International Conference on Consumer Electronics (ICCE)
DOI: 10.1109/icce.2018.8326308
Driver's gaze zone estimation by transfer learning

Cited by 10 publications (12 citation statements). References 13 publications.
“…However, the processing time was prolonged, particularly because the driver's gaze position was calculated from the extracted features using three VGG-face 16 models. In [18], the driver's gaze position was calculated by transfer learning from a trained model: the iTracker CNN, developed by the iTracker team to estimate gaze regions on Apple phones and tablets.…”
Section: Deep Feature-based Methods
confidence: 99%
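Since the excerpt above describes transfer learning from a trained gaze model, a minimal sketch of that pattern may be useful. The iTracker weights are not bundled with torchvision, so an ImageNet-pretrained ResNet-18 stands in for the trained model here; the gaze-zone count and the training batch are hypothetical placeholders, not details from [18].

```python
# Minimal transfer-learning sketch: freeze a pretrained backbone and
# retrain only a small head to classify the driver's gaze zone.
# Assumption: ResNet-18 stands in for the iTracker CNN, whose weights
# are not publicly distributed with torchvision.
import torch
import torch.nn as nn
from torchvision import models

NUM_GAZE_ZONES = 9  # hypothetical zone count (windshield, mirrors, cluster, ...)

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False  # keep pretrained features fixed

# Replace the original classification head with a gaze-zone classifier.
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_GAZE_ZONES)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of driver face crops.
images = torch.randn(8, 3, 224, 224)            # stand-in input images
labels = torch.randint(0, NUM_GAZE_ZONES, (8,)) # stand-in zone labels
logits = backbone(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```

Freezing the backbone and training only the new head is the cheapest form of transfer learning; unfreezing later layers for fine-tuning is a common next step when more labeled driver data is available.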
“…Handcrafted feature-based: depends on the detection of facial landmarks [1,2,4–7,9,10]. Deep feature-based: uses AlexNet and VGG net-16 [11], AlexNet [14], SqueezeNet [15], VGG-face 16 [17], or the iTracker model [18]; the accuracy of gaze detection is scarcely reduced by rotation of the driver's face.…”
Section: Handcrafted Feature-based
confidence: 99%
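To make the contrast in the excerpt above concrete, here is a hedged sketch of the handcrafted, landmark-dependent side of that comparison: head pose recovered from six detected facial landmarks via OpenCV's solvePnP with a generic 3D face model. The landmark coordinates, 3D model points, and camera intrinsics below are fabricated for illustration; the point is that the whole estimate collapses if landmark detection fails, e.g., under large face rotation, which is the weakness the deep feature-based methods avoid.

```python
# Sketch of a landmark-based head-pose estimate (handcrafted pipeline).
# All numeric values are illustrative assumptions, not from the cited work.
import numpy as np
import cv2

# Generic 3D face model points (mm): nose tip, chin, eye corners, mouth corners.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),           # nose tip
    (0.0, -330.0, -65.0),      # chin
    (-225.0, 170.0, -135.0),   # left eye outer corner
    (225.0, 170.0, -135.0),    # right eye outer corner
    (-150.0, -150.0, -125.0),  # left mouth corner
    (150.0, -150.0, -125.0),   # right mouth corner
], dtype=np.float64)

def head_pose(landmarks_2d, frame_size):
    """Return head rotation/translation given six detected 2D landmarks."""
    h, w = frame_size
    focal = w  # common approximation: focal length ~ image width
    camera_matrix = np.array([[focal, 0, w / 2],
                              [0, focal, h / 2],
                              [0, 0, 1]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, landmarks_2d,
                                  camera_matrix, None)
    return rvec, tvec  # head orientation then drives the gaze-zone decision

# Fabricated landmark positions for a 640x480 frame, standing in for the
# output of a real facial-landmark detector.
landmarks = np.array([(320, 240), (325, 380), (250, 200),
                      (390, 200), (270, 300), (370, 300)], dtype=np.float64)
rvec, tvec = head_pose(landmarks, (480, 640))
```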
“…2, many researchers have attempted to develop contactless monitoring systems for various applications. Previous methods for monitoring people can be divided into 1) vision-based [4]–[13], 2) wearable-based [14]–[21], and 3) head-gaze-based technologies [22]–[31]. Vision-based methods usually detect and track the pose or movements of medically vulnerable people using user images captured by cameras.…”
Section: Figure 1: Examples of Medically Vulnerable People
confidence: 99%
“…For example, head-gaze estimation systems estimate where a user is looking (e.g., a side or rear mirror) while driving a vehicle [22]–[31] (refer to Fig. 3b).…”
Section: Figure 1: Examples of Medically Vulnerable People
confidence: 99%
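The zone lookup implied by the excerpt above (mapping where the driver is looking to a target such as a side or rear mirror) can be sketched as a simple range test on estimated yaw/pitch angles. The zone names and angle boundaries below are illustrative assumptions, not values from the cited papers.

```python
# Toy gaze-zone lookup: resolve an estimated (yaw, pitch) head-gaze angle
# to a named in-cabin zone. All boundaries are hypothetical placeholders.
from typing import Optional

# (zone name, yaw range in degrees, pitch range in degrees)
GAZE_ZONES = [
    ("rear-view mirror",   (10, 40),   (5, 25)),
    ("left side mirror",   (-60, -30), (-10, 10)),
    ("right side mirror",  (30, 60),   (-10, 10)),
    ("windshield",         (-15, 15),  (-10, 15)),
    ("instrument cluster", (-10, 10),  (-35, -15)),
]

def gaze_zone(yaw: float, pitch: float) -> Optional[str]:
    """Return the first zone whose yaw/pitch ranges contain the estimate."""
    for name, (y_lo, y_hi), (p_lo, p_hi) in GAZE_ZONES:
        if y_lo <= yaw <= y_hi and p_lo <= pitch <= p_hi:
            return name
    return None  # gaze falls outside all defined zones

print(gaze_zone(yaw=35.0, pitch=0.0))  # -> "right side mirror"
```

Because adjacent zones can overlap in angle, list order acts as a priority here; a real system would instead learn zone boundaries from labeled data, as the CNN-based methods cited above do.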
“…Tawari et al. [7] used a multiple-camera vision algorithm to monitor and detect driver behavior under a variety of occlusion and lighting conditions. Additional studies have developed models to analyze head motion using video capture methods, as seen in [8]–[10]. While video methods provide a high degree of visual clarity for processing driver gaze, the privacy implications of being recorded are undesirable.…”
Section: Introduction
confidence: 99%