2017 International Conference on 3D Immersion (IC3D)
DOI: 10.1109/ic3d.2017.8251913

Predicting head trajectories in 360° virtual reality videos

Cited by 29 publications (18 citation statements) · References 16 publications
“…Azuma et al [16] characterized the user's head motion as position, velocity and acceleration, and proposed a predictor to derive the future head position. The authors in [17] took content-related features into account, and predicted the viewpoint based on a saliency algorithm.…”
Section: Related Work (mentioning)
confidence: 99%
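The excerpt above describes the kinematics-based predictor of Azuma et al. [16] only at a high level. As a rough, hypothetical illustration (not the paper's actual formulation), a constant-acceleration extrapolation of head pose might look like this:

```python
import numpy as np

def predict_head_position(pos, vel, acc, dt):
    """Extrapolate a future head pose from its current kinematic state.

    A constant-acceleration sketch in the spirit of the predictor
    attributed to Azuma et al. [16]; the paper's actual formulation
    may differ. pos, vel, acc are 3-vectors (e.g. yaw, pitch, roll
    and their time derivatives); dt is the look-ahead in seconds.
    """
    pos, vel, acc = (np.asarray(x, dtype=float) for x in (pos, vel, acc))
    # Second-order Taylor expansion: p(t + dt) = p + v*dt + 0.5*a*dt^2
    return pos + vel * dt + 0.5 * acc * dt ** 2

# Head turning right at 0.5 rad/s and decelerating, predicted 200 ms ahead.
print(predict_head_position([0.1, 0.0, 0.0], [0.5, 0.0, 0.0], [-0.1, 0.0, 0.0], dt=0.2))
```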
“…However, the approaches in [4], [5] use naive models and ignore the video content's relation to future movement, and are thus less accurate. Other existing works, such as [6]-[10], combine both video content features and the orientation of the HMD to predict future head movement. In [6], the authors use a pre-trained saliency model to predict head movement.…”
Section: A. Related Work (mentioning)
confidence: 99%
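The combination of content saliency with HMD orientation described in [6]-[10] can be sketched generically. The following is a hypothetical fusion, not any cited paper's method; the grid layout, the Gaussian motion prior, and the blending weight alpha are all assumptions:

```python
import numpy as np

def fuse_saliency_and_motion(saliency, trajectory, alpha=0.7):
    """Hypothetical fusion of content saliency with a head-motion prior.

    A generic sketch of the idea in [6]-[10]: linearly extrapolate the
    recent gaze track into a motion prior, blend it with a normalized
    saliency map, and return the location of the peak as the predicted
    viewpoint.

    saliency:   2D map on an equirectangular grid (rows = pitch, cols = yaw).
    trajectory: list of (row, col) gaze points, most recent last.
    alpha:      blending weight between motion prior and saliency (assumed).
    """
    h, w = saliency.shape
    (r0, c0), (r1, c1) = trajectory[-2], trajectory[-1]
    pred_r = int(np.clip(2 * r1 - r0, 0, h - 1))   # pitch is clamped
    pred_c = (2 * c1 - c0) % w                     # yaw wraps in 360° video
    # Gaussian bump around the extrapolated point (yaw wrap-around is
    # ignored inside the bump for simplicity in this sketch).
    rr, cc = np.mgrid[0:h, 0:w]
    sigma = w / 16.0
    motion_prior = np.exp(-((rr - pred_r) ** 2 + (cc - pred_c) ** 2) / (2 * sigma ** 2))
    fused = alpha * motion_prior + (1 - alpha) * saliency / (saliency.max() + 1e-8)
    return np.unravel_index(np.argmax(fused), fused.shape)

# Example on a random map with a gaze track drifting right:
print(fuse_saliency_and_motion(np.random.rand(90, 180), [(45, 80), (45, 90)]))
```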
“…Although the works in [6]-[10] use both video saliency and the history of head orientation to predict future head movement, these prior works do not treat the video frame content in much detail, even though VR videos contain various scenes and each scene has different regions of interest for users.…”
Section: A. Related Work (mentioning)
confidence: 99%
“…The strategy presented by Aladagli et al. in [1] extracts the saliency of the current frame with an off-the-shelf method, identifies the most salient point, and predicts the next FoV to be centered on this most salient point. It then builds on these predictions recursively.…”
Section: IC3D17 (mentioning)
confidence: 99%
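As a minimal sketch of the saliency-argmax strategy attributed to Aladagli et al. [1]: saliency_fn below is a stand-in for an off-the-shelf saliency model, not a real API, and the per-frame loop is only one plausible reading of the recursive scheme.

```python
import numpy as np

def predict_fov_centers(frames, saliency_fn, steps):
    """Sketch of the saliency-argmax strategy described for Aladagli et al. [1].

    saliency_fn maps a frame to a 2D saliency map on the same grid.
    Each step centers the next predicted FoV on the most salient
    point, and the predicted centers accumulate step by step.
    """
    centers = []
    for frame in frames[:steps]:
        sal = saliency_fn(frame)
        # The most salient point becomes the next FoV center.
        centers.append(np.unravel_index(np.argmax(sal), sal.shape))
    return centers
```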