2019 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip.2019.8803635

Quality Metric Aggregation for HDR/WCG Images

Abstract: High Dynamic Range (HDR) and Wide Color Gamut (WCG) screens are able to display images with brighter and darker pixels and more vivid colors than ever. Automatically assessing the quality of these HDR/WCG images is of critical importance for evaluating the performance of image compression schemes. In recent years, full-reference metrics, such as HDR-VDP-2 and PU-encoding metrics, have been designed for this purpose. However, none of these metrics considers chromatic artifacts. In this paper, we propose our own full…

Cited by 7 publications (8 citation statements)
References 21 publications (26 reference statements)
“…• We present the best color difference quality metric that has a certain level of simplicity (e.g., no filter banks). As such, we will not evaluate the currently best performing image quality metrics for HDR imaging, such as HDR-VDP-2 35,36 or HDR-VQM, 37 or newer approaches combining such metrics using machine learning techniques, [38][39][40][41][42] which are significantly more computationally complex than the metrics described in this article. • Since we have previously shown that the per-pixel ΔE ITP metric outperforms other per-pixel metrics, 1 in this article we motivate and present two spatial extensions to the ΔE ITP metric that produce state-of-the-art results when compared to per-pixel color difference metrics.…”
Section: Contributions
confidence: 99%
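The ΔE ITP metric referenced in these statements is a per-pixel color difference defined in ITU-R BT.2124 over the ICtCp color space. A minimal sketch of the per-pixel computation, assuming the inputs are already converted to ICtCp (the conversion from linear RGB is omitted here):

```python
import numpy as np

def delta_e_itp(ictcp_ref, ictcp_dist):
    """Per-pixel color difference in ITP space (ITU-R BT.2124).

    Both inputs are H x W x 3 float arrays holding ICtCp components.
    ITP is derived from ICtCp by halving the Ct component.
    """
    itp_ref = np.asarray(ictcp_ref, dtype=np.float64).copy()
    itp_dist = np.asarray(ictcp_dist, dtype=np.float64).copy()
    itp_ref[..., 1] *= 0.5   # T = 0.5 * Ct; I and P are unchanged
    itp_dist[..., 1] *= 0.5
    diff = itp_ref - itp_dist
    # BT.2124 scales the Euclidean distance by 720 so that a value
    # of ~1 corresponds roughly to a just-noticeable difference.
    return 720.0 * np.sqrt(np.sum(diff ** 2, axis=-1))
```

The spatial extensions discussed by the citing work build on this per-pixel map; only the baseline per-pixel form is shown here.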
“… Given the current trends toward widespread adoption of HDR and WCG, we provide insight into the differences between HDR and SDR and motivate why developing a metric for HDR and WCG applications is important. We present the best color difference quality metric that has a certain level of simplicity (e.g., no filter banks). As such, we will not evaluate the currently best performing image quality metrics for HDR imaging, such as HDR‐VDP‐2 35,36 or HDR‐VQM, 37 or newer approaches combining such metrics using machine learning techniques, 38‐42 which are significantly more computationally complex than the metrics described in this article. Since we have previously shown that the per‐pixel ΔE ITP metric outperforms other per‐pixel metrics, 1 in this article we motivate and present two spatial extensions to the ΔE ITP metric that produce state‐of‐the‐art results when compared to per‐pixel color difference metrics. We presented preliminary versions of the spatial extension of ΔE ITP in our previous work. 2,3 …”
Section: Introduction
confidence: 99%
“…On the other hand, the use of multiple objective metrics computed on these viewports allows our method to perform well despite the complex and diverse nature of visual distortions appearing in 360-degree videos. Indeed, previous work in both traditional 2D [8,9,10] and 360-degree [7] VQA has recognized that even with the multitude of available objective IQA metrics, there is no single one that always performs best for all distortions. The combination of multiple metrics is thus a promising approach that can take advantage of the strengths of individual metrics to correlate with subjective scores across different distortions [11,10].…”
Section: Introduction
confidence: 99%
“…Indeed, previous work in both traditional 2D [8,9,10] and 360-degree [7] VQA has recognized that even with the multitude of available objective IQA metrics, there is no single one that always performs best for all distortions. The combination of multiple metrics is thus a promising approach that can take advantage of the strengths of individual metrics to correlate with subjective scores across different distortions [11,10]. Experimental results, based on the largest publicly available 360-degree video quality dataset, VQA-ODV [12], show the viability of our proposal, which outperforms state-of-the-art methods for 360-degree quality assessment.…”
Section: Introduction
confidence: 99%
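The metric-combination idea running through these citation statements can be sketched as a learned aggregation: fit weights that map individual objective metric scores onto subjective scores. The data and the plain linear model below are illustrative assumptions, not the regression used by the paper or any citing work:

```python
import numpy as np

# Hypothetical scores from three objective metrics on 50 images
# (e.g. a PSNR-like, an SSIM-like, and a color-difference metric),
# plus synthetic subjective mean opinion scores (MOS) to fit against.
rng = np.random.default_rng(0)
metric_scores = rng.uniform(0.0, 1.0, size=(50, 3))
true_weights = np.array([0.2, 0.5, 0.3])          # ground truth for the demo
mos = metric_scores @ true_weights + rng.normal(0.0, 0.01, size=50)

# Fit a linear aggregation MOS ~ X @ w + b by ordinary least squares.
X = np.column_stack([metric_scores, np.ones(len(mos))])
w, *_ = np.linalg.lstsq(X, mos, rcond=None)

def aggregate(scores):
    """Predict a quality score from individual metric scores."""
    return scores @ w[:3] + w[3]
```

Real aggregation schemes, including the machine-learning combinations the citing authors mention, typically use nonlinear regressors, but the pipeline shape (per-metric scores in, one predicted subjective score out) is the same.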
“…On the other hand, the use of multiple objective metrics computed on these viewports allows our method to perform well despite the complex and diverse nature of visual distortions appearing in 360-degree videos. Indeed, previous work in both traditional 2D [25], [32], [34] and 360-degree [3] VQA has recognized that even with the multitude of available objective IQA metrics, there is no single one that always performs best for all distortions. 1 Compared to [4], we extend the individual features used by our model, propose an adapted temporal pooling method, and provide an extensive new set of experiments, including different regression methods and a cross-dataset validation.…”
confidence: 99%
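Temporal pooling, mentioned in the last statement, turns per-frame quality scores into a single sequence-level score. One common family of strategies averages only the worst frames, since severe transient distortions dominate perceived quality; the function below is an illustrative sketch under that assumption, not the cited paper's pooling method, and the 20% fraction is an arbitrary choice:

```python
import numpy as np

def pool_temporal(frame_scores, worst_fraction=0.2):
    """Pool per-frame quality scores into one sequence-level score.

    Averages only the worst `worst_fraction` of frames, assuming a
    lower score means worse quality. `worst_fraction` here is an
    illustrative default, not a value from the cited work.
    """
    scores = np.sort(np.asarray(frame_scores, dtype=np.float64))
    k = max(1, int(len(scores) * worst_fraction))
    return scores[:k].mean()
```

Plain mean pooling is the `worst_fraction=1.0` special case, which makes it easy to compare both strategies in a cross-dataset validation like the one the citing authors describe.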