2018 21st International Conference on Intelligent Transportation Systems (ITSC)
DOI: 10.1109/itsc.2018.8569655
Multi-Hypothesis Multi-Model Driver's Gaze Target Tracking

Cited by 5 publications (6 citation statements) · References 24 publications
“…Drivers' gaze direction estimation uses approaches discussed in Section IV-A, whereas detecting objects in the scene relies on off-the-shelf algorithms [164], classical vision pipelines [163], [165], [166], or manually annotated bounding boxes [167]. Distances to the detected objects and their relative velocities may be inferred from a stereo camera [166], provided by range sensors [166], [168]- [171], or determined using simple heuristics [164].…”
Section: E. Driver Awareness Estimation (mentioning)
confidence: 99%
“…Some algorithms also take into account that drivers retain information about objects for some time after looking at them [164], [171], [172], as well as other properties of the scene, such as weather and proximity of other road users [172]. For example, Schwehr et al [168], [173] model the joint probability distribution of the object states in the 2D vehicle coordinate system, object coordinates, and the driver's gaze direction in 2D to estimate which objects have been fixated or tracked. Ahlstrom et al [172] modify the AttenD algorithm (described earlier in Section IV-B) to include elements of context via additional buffers for targets of relevance which, besides traffic ahead and behind, include intersections.…”
Section: E. Driver Awareness Estimation (mentioning)
confidence: 99%
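The joint-probability formulation of Schwehr et al. is not reproduced here, but the two ingredients the statement above describes (a gaze-to-object association score in the 2D vehicle frame, plus a per-object buffer that retains awareness for some time after the driver looks away) can be loosely illustrated in Python. All names, the Gaussian angular model, and the gain/decay constants below are assumptions of this sketch, not taken from the cited papers:

```python
import math

def gaze_object_likelihood(gaze_angle, object_bearing, sigma=math.radians(5.0)):
    """Gaussian score that the gaze ray points at an object's bearing.

    Angles are in radians, measured in a 2D vehicle coordinate frame.
    """
    # Wrapped angular difference so that e.g. -pi and +pi compare as equal.
    d = math.atan2(math.sin(gaze_angle - object_bearing),
                   math.cos(gaze_angle - object_bearing))
    return math.exp(-0.5 * (d / sigma) ** 2)

class AwarenessBuffer:
    """Per-object awareness level that rises while the object is fixated
    and decays afterwards, mimicking that drivers retain information
    about objects for some time after looking at them."""

    def __init__(self, decay_per_s=0.2, gain_per_s=2.0):
        self.decay = decay_per_s   # awareness lost per second off-gaze
        self.gain = gain_per_s     # awareness gained per second on-gaze
        self.levels = {}           # object id -> awareness in [0, 1]

    def update(self, fixated_id, object_ids, dt):
        """Advance all buffers by dt seconds; `fixated_id` gains, others decay."""
        for oid in object_ids:
            level = self.levels.get(oid, 0.0)
            if oid == fixated_id:
                level += self.gain * dt
            else:
                level -= self.decay * dt
            self.levels[oid] = min(1.0, max(0.0, level))
        return self.levels
```

In use, one would pick the object with the highest likelihood as the current fixation target at each time step and feed it to the buffer; context-dependent extra buffers (as in the modified AttenD variant) would add further entries rather than change the mechanics.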
“…The project was scheduled from 2015 to 2018 and four research assistants from three different institutes of TU Darmstadt worked together on this interdisciplinary project. Within this frame, several articles comprising new algorithms for driver intention detection and online driver adaptation [5][6][7][8][9], visual localization and mapping [10][11][12][13] and driver gaze target estimation [14][15][16][17] have been published as well as articles on safety approval of machine learning algorithms in the automotive context [18]. Many of the core ideas can be retrieved in the exemplary prototypical assistance system that is presented in this work.…”
Section: Motivation (mentioning)
confidence: 99%
“…For increased robustness, only 90 % of the gaze samples need to intersect with the object's bounding box within the investigated fixation duration of 250 ms in order to detect a visual fixation. Further promising work in object-of-fixation detection uses tracking assumptions such that during a fixation, motion and relative geometric relations of gaze and object should be consistent [15][16][17].…”
Section: Driver Visual Cues for Warning Strategy Adaptation (mentioning)
confidence: 99%
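The 90 % / 250 ms fixation heuristic quoted above is concrete enough to sketch. This is a minimal Python illustration, assuming 2D gaze points in image coordinates, an axis-aligned bounding box, and a 60 Hz sample rate; the helper names and the sliding-window layout are choices of this sketch, not of the cited work:

```python
def inside(box, point):
    """Axis-aligned bounding-box test; box = (x_min, y_min, x_max, y_max)."""
    x, y = point
    x0, y0, x1, y1 = box
    return x0 <= x <= x1 and y0 <= y <= y1

def detect_fixation(gaze_samples, box, sample_rate_hz=60.0,
                    window_s=0.25, hit_ratio=0.9):
    """Detect a visual fixation on the object behind `box`.

    Returns True if, within any sliding window of `window_s` seconds,
    at least `hit_ratio` of the gaze samples intersect the bounding box,
    so up to 10 % outliers per window are tolerated for robustness.
    """
    n = max(1, int(round(window_s * sample_rate_hz)))  # samples per window
    if len(gaze_samples) < n:
        return False
    hits = [inside(box, p) for p in gaze_samples]
    for i in range(len(hits) - n + 1):
        if sum(hits[i:i + n]) / n >= hit_ratio:
            return True
    return False
```

Tolerating a fraction of misses inside the window is what distinguishes this rule from a strict "every sample must hit" test; the tracking-based alternatives cited at the end of the statement instead require gaze and object motion to stay geometrically consistent over time.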