FovVideoVDP (2021)
DOI: 10.1145/3450626.3459831

Abstract: FovVideoVDP is a video difference metric that models the spatial, temporal, and peripheral aspects of perception. While many other metrics are available, our work provides the first practical treatment of these three central aspects of vision simultaneously. The complex interplay between spatial and temporal sensitivity across retinal locations is especially important for displays that cover a large field-of-view, such as Virtual and Augmented Reality displays, and associated methods, such as foveated rendering…

Cited by 94 publications (35 citation statements)
References 70 publications
“…2.1). In this section, we experimentally validate whether integration of our attention-aware perceptual model improves performance of the state-of-the-art visual difference predictor FovVideoVDP [Mantiuk et al 2021] in predicting visibility of foveation artifacts under varying attention conditions. To that end, we emulate a simple foveated renderer and we separately calibrate the foveation intensity for three different attention regimes in a user study.…”
Section: Attention-aware Foveated Rendering
Confidence: 89%
“…This model also captured luminance dependence and was later refit to model stationary content at higher eccentricities [Watson 2018]. Recently, models capturing eccentricity dependence over the full spatio-temporal domain were also presented [Krajancich et al. 2021; Mantiuk et al. 2021].…”
Section: Eccentricity-dependent CSF Models
Confidence: 99%
“…A more fundamental approach to creating video quality metrics is low-level visual modelling based on psychophysical models, such as the contrast sensitivity function (CSF) [3]: the threshold at which a human observer can detect a change in a given brightness pattern, as a function of spatial and temporal frequency. The visibility of artefacts to the human visual system in early vision is governed by this sensitivity function.…”
Section: Related Work
Confidence: 99%
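As a concrete illustration of such a sensitivity function, the classic Mannos–Sakrison spatial CSF approximation (a simple achromatic model with the constants of the original 1974 fit, shown here only as an example, not the model used by the metric above) predicts sensitivity that peaks near 8 cycles/degree and falls off at both low and high frequencies:

```python
import math

def csf_mannos_sakrison(f_cpd):
    """Contrast sensitivity at spatial frequency f_cpd (cycles/degree),
    using the Mannos-Sakrison (1974) approximation. Sensitivity is the
    reciprocal of the detection-threshold contrast (relative units)."""
    return 2.6 * (0.0192 + 0.114 * f_cpd) * math.exp(-((0.114 * f_cpd) ** 1.1))

# Sensitivity peaks near 8 cpd and drops toward both very coarse
# patterns (low frequency) and very fine detail (high frequency).
```

A quality metric built on such a model weights distortions by how sensitive the eye is at each frequency, rather than treating all pixel differences equally.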
“…Unfortunately, popular objective prediction models correlate poorly with subjective quality judgements by the human visual system (HVS) and depend on the systems or processes involved [2]. On the other hand, there are algorithmically complex video quality metrics (VQM) based on models of the human visual system [1], [3], and an open question is whether such HVS-based metrics provide significantly better predictions than simpler objective metrics. A further problem with visual models is that building a video quality metric requires representing the HVS in software, a task impeded by the limited fundamental knowledge of how the HVS perceives video content on modern equipment.…”
Section: Introduction
Confidence: 99%