Blind Sharpness Prediction for Ultrahigh-Definition Video Based on Human Visual Resolution
2017
DOI: 10.1109/tcsvt.2016.2515303

Cited by 13 publications (7 citation statements)
References 38 publications
“…viewing geometry factors (viewing distance, display resolution, display size, and display type: flat or curved). Therefore, many existing QoE studies have applied viewing geometry to design prediction models that reflect perceptual resolution [7, 33–35]. Figure 2(b) geometrically depicts an example of the perceived pixel according to display type.…”
Section: A) QoE Trend on 2D Display
Confidence: 99%
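The perceptual-resolution idea in the quoted passage reduces, in its simplest form, to converting display geometry into angular pixel density. The sketch below is a minimal illustration of that conversion for a flat panel, using only the parameters named in the quote (viewing distance, display width, horizontal resolution); the function name and the 55-inch UHD example values are assumptions of this sketch, not taken from the cited papers.

```python
import math

def pixels_per_degree(distance_mm: float, screen_width_mm: float,
                      horizontal_pixels: int) -> float:
    """Angular pixel density seen by the viewer, in pixels per degree.

    A pixel of pitch p viewed head-on from distance d subtends
    2 * atan(p / (2 * d)) degrees of visual angle.
    """
    pixel_pitch = screen_width_mm / horizontal_pixels  # mm per pixel
    deg_per_pixel = 2.0 * math.degrees(
        math.atan(pixel_pitch / (2.0 * distance_mm)))
    return 1.0 / deg_per_pixel

# Hypothetical example: a 55-inch UHD (3840x2160) flat panel,
# roughly 1218 mm wide, viewed from about 1.0 m.
ppd = pixels_per_degree(distance_mm=1000.0, screen_width_mm=1218.0,
                        horizontal_pixels=3840)
print(f"{ppd:.1f} pixels per degree")
```

Higher pixels-per-degree means individual pixels fall below the eye's resolving limit, which is the geometric quantity such prediction models weight; a curved display would change the effective pitch toward the screen edges.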
“…Foveation: The distribution of photoreceptors in the human eye is not uniform and decreases away from the center of the fovea [12, 13]. This characteristic is called foveation and has been employed as a spatial weight over the 2D domain in many existing studies [7, 12, 13, 31, 33]. For example, when a viewer gazes at a fixation point, as shown in Fig.…”
Section: QoE on 2D Display
Confidence: 99%
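As one concrete reading of the foveation weighting described above, the sketch below builds an eccentricity-based weight map over a frame. The specific falloff e2 / (e2 + e) and the half-resolution eccentricity e2 ≈ 2.3° are assumptions borrowed from common foveation models, not necessarily the weighting used in the cited studies.

```python
import numpy as np

def foveation_weights(height: int, width: int, fix_y: int, fix_x: int,
                      ppd: float, e2: float = 2.3) -> np.ndarray:
    """Eccentricity-based spatial weights, highest at the fixation point.

    Each pixel's eccentricity e (in degrees) is its distance from the
    fixation point divided by the pixels-per-degree of the viewing setup;
    the weight e2 / (e2 + e) decays with eccentricity, reaching one half
    at the (assumed) half-resolution eccentricity e2.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    dist_px = np.hypot(ys - fix_y, xs - fix_x)  # pixel distance to fixation
    ecc_deg = dist_px / ppd                     # eccentricity in degrees
    return e2 / (e2 + ecc_deg)

# Center fixation on a UHD frame, assuming ~55 pixels per degree.
w = foveation_weights(2160, 3840, fix_y=1080, fix_x=1920, ppd=55.0)
print(w.max(), w.min())  # 1.0 at fixation, decaying toward the borders
```

Multiplying a per-pixel distortion or sharpness map by such a weight map emphasizes errors near the assumed gaze point, which is the role foveation plays as a "spatial weight" in the quoted studies.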
“…Also, some other researchers proposed that semantic cues from multiple event recognitions should be fused by means of a deep-level learning strategy, so that recognition is addressed by jointly analysing human actions, objects, and scenes. That is, each type of semantic feature is first passed through its own multi-level feature-abstraction path, with one fusion level connecting all of the paths so as to learn the mutual relevance of the semantic cues via unsupervised cross-channel coding; finally, the question of how semantic cues compose one event, or a group of events, is answered by fine-tuning the architecture at large scale [21–24]. This paper adopts a three-layer semantic recognition approach based on key frame extraction.…”
Section: Video Semantic Analysis and Relevant Research
Confidence: 99%
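The quoted description of multi-path cue fusion is high-level, so the following PyTorch sketch is only one plausible instantiation: three per-cue abstraction paths (actions, objects, scenes) joined at a single fusion level and read out by an event classifier. All layer sizes, the MLP paths, and the supervised head are assumptions of this sketch; the unsupervised cross-channel coding and fine-tuning stages from the quote are not modeled.

```python
import torch
import torch.nn as nn

class MultiCueFusion(nn.Module):
    """Three per-cue abstraction paths joined by one fusion level,
    mirroring the multi-path architecture described in the quote."""

    def __init__(self, dims=(512, 512, 512), hidden=256, num_events=20):
        super().__init__()
        # One small abstraction path per semantic cue (action/object/scene).
        self.paths = nn.ModuleList(
            nn.Sequential(nn.Linear(d, hidden), nn.ReLU(),
                          nn.Linear(hidden, hidden), nn.ReLU())
            for d in dims
        )
        # Single fusion level connecting all paths, then an event head.
        self.fusion = nn.Sequential(nn.Linear(3 * hidden, hidden), nn.ReLU())
        self.classifier = nn.Linear(hidden, num_events)

    def forward(self, action, obj, scene):
        feats = [p(x) for p, x in zip(self.paths, (action, obj, scene))]
        fused = self.fusion(torch.cat(feats, dim=-1))
        return self.classifier(fused)

model = MultiCueFusion()
logits = model(torch.randn(4, 512), torch.randn(4, 512), torch.randn(4, 512))
print(logits.shape)  # torch.Size([4, 20])
```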