2016
DOI: 10.1049/iet-cvi.2015.0296

‘Owl’ and ‘Lizard’: patterns of head pose and eye pose in driver gaze classification

Abstract: Accurate, robust, inexpensive gaze tracking in the car can help keep a driver safe by facilitating the more effective study of how to improve (1) vehicle interfaces and (2) the design of future Advanced Driver Assistance Systems. In this paper, we estimate head pose and eye pose from monocular video using methods developed extensively in prior work and ask two new interesting questions. First, how much better can we classify driver gaze using head and eye pose versus just using head pose? Second, are …

Cited by 88 publications (95 citation statements)
References 16 publications (30 reference statements)
“…They can be further divided coarsely into two categories, i.e. geometry-distribution-based methods [4], [39]–[43] and 3D-facial-model-based methods [5]–[10], [44]–[49].…”
Section: Related Work
confidence: 99%
“…By looking for the projection relation between a 3D facial model and a 2D face image, head pose angles can be calculated directly from the elements of the rotation matrix (see Section III for details). Mbouna et al. [5], Fridman et al. [6], and Tawari et al. [7] solved for the rotation matrix to estimate head pose directly from a 3D facial model and the corresponding 2D facial feature points. Bar et al. [8] provided 3D facial templates to match against the 3D point cloud obtained from depth values, estimating head pose with an iterative closest point (ICP) algorithm.…”
Section: Related Work
confidence: 99%
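The rotation-matrix approach this excerpt describes can be made concrete with a short sketch. The snippet below is a minimal, hypothetical illustration using OpenCV's solvePnP: the 3D model points, the approximate camera intrinsics, and the head_pose_from_landmarks helper are assumptions for illustration, not the actual pipeline of [5]–[7].

```python
# A minimal sketch of the 3D-facial-model approach: solve for the rotation
# matrix from 2D-3D point correspondences, then read head pose angles
# directly from its elements. All constants here are illustrative.
import numpy as np
import cv2

# Generic rigid 3D facial model points (nose tip, chin, eye corners, mouth
# corners) in an arbitrary model frame; values are illustrative, not fitted.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),           # nose tip
    (0.0, -330.0, -65.0),      # chin
    (-225.0, 170.0, -135.0),   # left eye outer corner
    (225.0, 170.0, -135.0),    # right eye outer corner
    (-150.0, -150.0, -125.0),  # left mouth corner
    (150.0, -150.0, -125.0),   # right mouth corner
], dtype=np.float64)

def head_pose_from_landmarks(image_points, frame_size):
    """Estimate (pitch, yaw, roll) in degrees from six 2D landmarks
    corresponding to MODEL_POINTS, given the frame size (h, w)."""
    h, w = frame_size
    # Approximate pinhole camera: focal length ~ image width, principal
    # point at the image centre, no lens distortion.
    camera_matrix = np.array([[w, 0, w / 2],
                              [0, w, h / 2],
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros(4)

    ok, rvec, _tvec = cv2.solvePnP(MODEL_POINTS, image_points,
                                   camera_matrix, dist_coeffs,
                                   flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        return None

    # Head pose angles come directly from the rotation matrix elements
    # (ZYX Euler decomposition; axis naming is convention-dependent).
    R, _ = cv2.Rodrigues(rvec)
    pitch = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    yaw = np.degrees(np.arcsin(-R[2, 0]))
    roll = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    return pitch, yaw, roll
```

In practice the six image points would come from any facial landmark detector; the depth-based ICP alternative in [8] replaces this 2D-3D correspondence step with direct 3D template matching.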
“…appearance descriptor), which allows for an increased number of gaze zones without sacrificing performance, as shown by evaluation on a dataset composed of multiple drivers. Another learning-based method is the work presented by Fridman et al. [15], where the evaluations are done on a significantly larger dataset, but the design of the features representing the state of the head and eyes causes their classifier to overfit to user-based models and to generalize poorly as a global model.…”
Section: A. On Gaze Estimation
confidence: 99%
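The user-based versus global-model distinction raised in this excerpt can be illustrated with a held-out-driver evaluation. The sketch below is a minimal, assumed setup: the feature layout, zone labels, random-forest choice, and the evaluate_cross_driver helper are all illustrative, not the cited authors' exact method.

```python
# A minimal sketch of gaze-zone classification from per-frame pose features,
# evaluated by holding out one driver. Comparing this score against a
# within-driver split exposes overfitting to driver-specific pose patterns.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def evaluate_cross_driver(X, y, driver_ids, held_out_driver):
    """Train on all drivers except one; test on the held-out driver."""
    train = driver_ids != held_out_driver
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X[train], y[train])
    return accuracy_score(y[~train], clf.predict(X[~train]))

# Illustrative usage with synthetic data: 6 gaze zones (e.g. road, left
# mirror, right mirror, rearview, instrument cluster, centre stack).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))         # e.g. [pitch, yaw, roll, eye_x, eye_y]
y = rng.integers(0, 6, size=1000)      # gaze-zone labels
drivers = rng.integers(0, 10, size=1000)
print(evaluate_cross_driver(X, y, drivers, held_out_driver=3))
```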
“…Manually annotating specific epochs of driving, as the prior studies have done, is no longer sufficient for understanding the complexities of human behavior in the context of autonomous vehicle technology (e.g., driver glance or body position over thousands of miles of Autopilot use). For example, one of many metrics that are important to understanding driver behavior is moment-by-moment detection of glance region [17], [18] (see §I-C). Accurately extracting this metric from the 2.2 billion frames of face video without the use of computer vision would require an immense investment in manual annotation, even assuming the availability of an efficient annotation tool that is specifically designed for the manual glance-region annotation task and can leverage distributed, online crowdsourcing.…”
Section: A. Naturalistic Driving Studies
confidence: 99%
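A rough back-of-envelope calculation illustrates the scale argument in this excerpt. Only the frame count comes from the quoted text; the annotation rate below (one frame per second per annotator) is an assumed figure for illustration.

```python
# Back-of-envelope estimate of the manual-annotation cost implied above.
FRAMES = 2.2e9              # face-video frames, from the quoted passage
RATE_PER_SEC = 1.0          # assumed frames annotated per second per worker
WORK_HOURS_PER_YEAR = 2000  # assumed full-time annotator year

hours = FRAMES / RATE_PER_SEC / 3600
person_years = hours / WORK_HOURS_PER_YEAR
print(f"{hours:,.0f} annotator-hours ~ {person_years:,.0f} person-years")
# -> 611,111 annotator-hours ~ 306 person-years
```

Even under this generous assumed rate, the task runs to hundreds of person-years, which is the excerpt's case for automated, vision-based glance-region detection.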