Proceedings of the 9th ACM Multimedia Systems Conference 2018
DOI: 10.1145/3204949.3208139
A dataset of head and eye movements for 360° videos

Cited by 164 publications (91 citation statements) · References 18 publications
“…The DNN is trained offline with a simulator of our exact 360° streaming player. It is fed with the 1121 head motion traces (one sample every 0.2 s) from the open dataset [6]. The instantaneous reward function r(t), with t denoting the segment playback time, is set to r(t) = Qual_FoV(t) − γ^(t−T(t)), where T(t) is the time of the last snap triggered before t. Hyper-parameters are: the buffer size is 4 s, γ = 0.3, the number of training epochs is 1000, the number of processes (agents) is 8, the fractions of data for training, validation and test are 70%, 10% and 20%, respectively, and the number of units in each layer is 128.…”
Section: Building Blocks
confidence: 99%
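Reading the extraction-garbled expression as an exponential snap penalty γ^(t−T(t)), a minimal Python sketch of this reward follows. The names qual_fov and last_snap_t are placeholders, and the FoV-quality term itself is not defined in the excerpt, so this is an illustration of the formula rather than the authors' implementation:

```python
# Sketch of the instantaneous reward described above. Assumes the
# garbled expression reads gamma**(t - T(t)); Qual_FoV(t) is supplied
# by the caller, since the excerpt does not define how it is computed.

GAMMA = 0.3            # snap-penalty decay factor (from the excerpt)
BUFFER_SIZE_S = 4.0    # other reported hyper-parameters, for reference
SAMPLE_PERIOD_S = 0.2  # head-motion trace sampling period, seconds

def reward(qual_fov: float, t: float, last_snap_t: float) -> float:
    """r(t) = Qual_FoV(t) - gamma^(t - T(t)),
    with T(t) the time of the last snap triggered before t."""
    return qual_fov - GAMMA ** (t - last_snap_t)

# Example: 1 s after a snap the penalty has decayed to 0.3**1 = 0.3.
print(reward(qual_fov=0.8, t=10.0, last_snap_t=9.0))  # 0.8 - 0.3 = 0.5
```

Under this reading, a snap incurs a full unit penalty at the moment it fires (γ^0 = 1) and the penalty fades geometrically afterwards, which matches the role of T(t) as the time of the most recent snap.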
“…The video is streamed from a laptop over WiFi. The visitor will first choose which video they prefer to watch, available from the open dataset in [6]. By deliberately changing their head motion speed, they will observe that they get repositioned in front of new FoVs at different frequencies.…”
Section: Building Blocks
confidence: 99%
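The excerpt does not say how snap frequency depends on head speed. Purely as an illustrative sketch, and not the cited authors' mechanism, one plausible rule triggers a snap whenever the gaze drifts more than a fixed angle from the last snapped FoV center, so faster head motion crosses the threshold sooner and snaps fire more often:

```python
# Hypothetical snap-trigger rule (NOT from the cited paper): snap the
# viewport whenever head yaw drifts more than SNAP_ANGLE_DEG away from
# the center of the last snapped FoV.
SNAP_ANGLE_DEG = 30.0

def simulate_snaps(yaw_trace_deg, sample_period_s=0.2):
    """Return the times (s) at which snaps fire for a yaw trace."""
    snap_times = []
    center = yaw_trace_deg[0]
    for i, yaw in enumerate(yaw_trace_deg):
        drift = abs((yaw - center + 180.0) % 360.0 - 180.0)  # wrap-aware
        if drift > SNAP_ANGLE_DEG:
            snap_times.append(i * sample_period_s)
            center = yaw  # recenter the FoV on the current gaze
    return snap_times

# A head turning at 20 deg/s snaps roughly every 1.6 s; at 60 deg/s,
# roughly every 0.6 s, illustrating the speed/frequency link.
slow = [20.0 * 0.2 * i for i in range(50)]
fast = [60.0 * 0.2 * i for i in range(50)]
print(len(simulate_snaps(slow)), len(simulate_snaps(fast)))  # 6 16
```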
“…47 Moreover, when shown in equirectangular projection, the 360° contents are non-Euclidean. 48 To address these problems, many studies on 360° images/videos in viewing databases, 49,50 image quality assessment, 51 and saliency detection, 52 have been conducted in recent years.…”
Section: Related Saliency Models
confidence: 99%
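To make the non-Euclidean point concrete: equirectangular projection maps longitude and latitude linearly to pixel coordinates, so a fixed pixel step near the poles spans a much smaller solid angle than the same step at the equator. The following sketch uses the standard pixel-to-sphere mapping (textbook formulas, not code from the cited works):

```python
import numpy as np

def equirect_to_sphere(u, v, width, height):
    """Map equirectangular pixel coordinates (u, v) to unit-sphere vectors.

    Longitude varies linearly across the image width and latitude down
    its height; equal pixel distances therefore correspond to very
    different angular distances depending on latitude, which is why
    planar (Euclidean) image models fit 360-degree content poorly.
    """
    lon = (u / width - 0.5) * 2.0 * np.pi   # longitude in [-pi, pi)
    lat = (0.5 - v / height) * np.pi        # latitude in [pi/2, -pi/2]
    x = np.cos(lat) * np.cos(lon)
    y = np.cos(lat) * np.sin(lon)
    z = np.sin(lat)
    return np.stack([x, y, z], axis=-1)

# The center pixel of a 1920x960 frame maps to lon = lat = 0,
# i.e. the unit vector (1, 0, 0).
print(equirect_to_sphere(960, 480, 1920, 960))  # ~[1. 0. 0.]
```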
“…Privacy & Security in Eye Tracking. There is a growing concern about keeping eye movement data private and secure, both in real-time applications [49] and in published datasets [50]. Publicly available datasets release de-identified gaze data from individuals viewing VR videos [19], the social interactions of children with ASD [22], and individual responses to emotional content such as nude imagery and faces [69]. Sensitive information, such as personality traits [36] and neurological diagnoses [47], could be linked to individuals who contributed to the aggregate data.…”
Section: Introduction
confidence: 99%