Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology 2019
DOI: 10.1145/3332165.3347887
View-Dependent Video Textures for 360° Video

Cited by 16 publications (1 citation statement)
References 27 publications
“…This necessitates methods that consider similarities in viewing behavior. While existing techniques enable greater uniformity in viewing behavior (e.g., looping video textures under a gazed-at region of interest [60]) or provide on-display guidance cues for where to look (e.g., Halo- and WedgeVR [36]), our goal was to allow as much viewing freedom as possible without manipulating video content. In this respect, our results showed how RCEA-360VR takes advantage of regularities in head movement patterns (cf. [82]) to ensure effective fused annotations (RQ2).…”
Section: Viewport-dependency and Fusing Fine-grained Emotion Labels
confidence: 99%
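
The citing work contrasts its free-viewing approach with the mechanism at the core of this paper: looping a video segment while the viewer's gaze rests on a region of interest, so the content appears to play seamlessly until the viewer looks away. Below is a minimal sketch of that general idea, assuming synthetic grayscale frames and hypothetical helper names (find_loop, next_frame); it is not the authors' implementation, which operates on real 360° video and viewport tracking.

```python
# Minimal sketch (not the paper's implementation) of gaze-triggered
# video-texture looping: find the most visually similar frame pair to
# use as a seamless loop, then wrap playback back to the loop start
# while the gaze stays inside the region of interest.
import numpy as np

def find_loop(frames: np.ndarray) -> tuple[int, int]:
    """Return (start, end) indices of the most similar non-adjacent
    frame pair, i.e. the cheapest seamless loop (video-texture style)."""
    n = len(frames)
    best, best_cost = (0, n - 1), np.inf
    for i in range(n):
        for j in range(i + 2, n):  # require a loop of at least 2 frames
            cost = np.mean((frames[i] - frames[j]) ** 2)
            if cost < best_cost:
                best, best_cost = (i, j), cost
    return best

def next_frame(t: int, loop: tuple[int, int], gaze_in_roi: bool) -> int:
    """Advance playback; wrap to the loop start while the gaze remains
    inside the region of interest, otherwise play straight through."""
    start, end = loop
    if gaze_in_roi and t >= end:
        return start
    return t + 1

# Tiny demo on synthetic frames with a planted near-seamless loop point.
rng = np.random.default_rng(0)
frames = rng.random((12, 4, 4))
frames[10] = frames[2] + 0.01
loop = find_loop(frames)
t = 0
for _ in range(20):
    t = next_frame(t, loop, gaze_in_roi=True)
print("loop:", loop, "final frame index:", t)
```

The quadratic frame-pair search stands in for the transition-cost analysis used by video-texture methods; the key property the citation statement alludes to is only the playback rule: looping is conditioned on where the viewer is looking, which makes viewing behavior more uniform across participants.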