2022
DOI: 10.1007/978-3-031-19778-9_8

Look Both Ways: Self-supervising Driver Gaze Estimation and Road Scene Saliency

Cited by 16 publications (4 citation statements)
References 45 publications
“…The first such dataset, 3DSS [38], was recorded in a video game environment. More recent ones are either captured on the road with drivers wearing head-mounted eye trackers (e.g., DR(eye)VE [14] and LBW [43]) or use pre-recorded naturalistic footage that human subjects can view in the lab (e.g., BDD-A [31] and DADA-2000 [44]). However, task-relevant information that can be effectively used for training and evaluation is usually not available or incomplete.…”
Section: Related Work
confidence: 99%
“…The proposed method, based on the full image (environment and face) or the full set of features (facial and Go-CaRD features; as illustrated in Figure 2 in [40]), outperformed other DL models such as InceptionV3, ResNet50, VGG16, and VGG19. Recently, Kasahara et al. [41] presented a new dataset, called “Look Both Ways”, which contains synchronized video of both driver faces and the forward road scene for gaze estimation and road scene saliency. The Look Both Ways dataset contains 123,297 synchronized driver face and stereo scene images with ground-truth 3D gaze, collected from 6.8 h of free driving on public roads by 28 drivers.…”
Section: Driver Gaze Analysis
confidence: 99%
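As a rough illustration of how such synchronized face/scene/gaze triples might be consumed, the sketch below pairs driver-face and road-scene frames with their 3D gaze labels. The directory layout, file names, and CSV columns are hypothetical assumptions for illustration only; they are not the published Look Both Ways format.

```python
# Minimal sketch, assuming a hypothetical on-disk layout for synchronized
# driver-face / road-scene frames with 3D gaze labels:
#   root/face/<frame_id>.png
#   root/scene/<frame_id>.png
#   root/gaze.csv  with columns: frame_id, gx, gy, gz
# This layout is an assumption for illustration, not the published dataset format.
import csv
from pathlib import Path

def load_synchronized_pairs(root):
    """Yield (face_image_path, scene_image_path, gaze_xyz) triples."""
    root = Path(root)
    with open(root / "gaze.csv", newline="") as f:
        for row in csv.DictReader(f):
            frame_id = row["frame_id"]
            face = root / "face" / f"{frame_id}.png"
            scene = root / "scene" / f"{frame_id}.png"
            gaze = (float(row["gx"]), float(row["gy"]), float(row["gz"]))
            # Skip frames where either camera view is missing.
            if face.exists() and scene.exists():
                yield face, scene, gaze

# Example: count the usable synchronized pairs.
# n = sum(1 for _ in load_synchronized_pairs("/path/to/dataset"))
```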
“…The human gaze is inherently implicit and poses a challenge for objective measurement, making gaze annotation complex in gaze data collection. Some methods capture the human gaze through intrusive devices such as eye-tracking glasses [13,19]. However, the eye-tracking glasses have a notable impact on the quality of the captured facial images.…”
Section: Gaze Data Collection
confidence: 99%
“…In-vehicle gaze datasets usually define different regions in the vehicle, such as the windshield and the left/right mirrors, and perform gaze zone classification [12,14,16,18,23,30,34]. Kasahara et al. [19] collect an in-vehicle gaze dataset, but subjects are required to wear eye-tracking glasses, which means the dataset is not applicable in the real world. Our gaze collection system does not require dedicated devices and produces natural face images.…”
Section: Datasets
confidence: 99%
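The gaze zone classification mentioned above can be illustrated with a small sketch that maps a 3D gaze direction to coarse in-vehicle regions such as the windshield and the left/right mirrors. The zone names, angle thresholds, and coordinate convention are hypothetical assumptions; real systems calibrate them per vehicle and per driver.

```python
import math

# Hypothetical yaw/pitch ranges in degrees for a left-hand-drive cabin,
# given as (yaw_min, yaw_max, pitch_min, pitch_max). These thresholds are
# illustrative assumptions, not values from any published system.
GAZE_ZONES = {
    "left mirror":        (-90.0, -45.0, -15.0, 15.0),
    "windshield":         (-30.0,  30.0, -10.0, 25.0),
    "right mirror":       ( 45.0,  90.0, -15.0, 15.0),
    "instrument cluster": (-15.0,  15.0, -40.0, -10.5),
}

def classify_gaze_zone(gaze_xyz):
    """Map a 3D gaze vector (x right, y down, z forward) to a coarse zone."""
    x, y, z = gaze_xyz
    yaw = math.degrees(math.atan2(x, z))     # left/right angle of the gaze ray
    pitch = math.degrees(math.atan2(-y, z))  # up/down angle of the gaze ray
    for zone, (y0, y1, p0, p1) in GAZE_ZONES.items():
        if y0 <= yaw <= y1 and p0 <= pitch <= p1:
            return zone
    return "other"

# Example: a gaze pointing straight ahead lands in the windshield zone.
# classify_gaze_zone((0.0, 0.0, 1.0))  -> "windshield"
```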