2019
DOI: 10.1016/j.image.2019.08.004
A reduced-reference quality assessment metric for super-resolution reconstructed images with information gain and texture similarity

Cited by 21 publications (8 citation statements)
References 30 publications
“…In a few papers, the authors proposed reduced-reference metrics, which use low-resolution (LR) images as a reference. A popular approach is to extract structure or texture features from the LR and upscaled (SR) images, compare them separately, and fuse the resulting similarity indices (Yeganeh et al., 2015; Tang et al., 2019; Fang et al., 2019). Metrics based on this idea achieve a Pearson correlation coefficient of 0.79 to 0.83 and a Spearman correlation coefficient of 0.69 to 0.85 on various datasets, depending on the implementation.…”
Section: Related Work (mentioning)
confidence: 99%
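The extract-compare-fuse pipeline described in the snippet above can be sketched as follows. This is a minimal illustration, not any of the cited implementations: the gradient-magnitude texture feature, block-average downscaling, and product-based fusion are all assumptions made for the sketch.

```python
import numpy as np

def gradient_magnitude(img):
    """Simple texture feature: per-pixel gradient magnitude."""
    gy, gx = np.gradient(img.astype(np.float64))
    return np.sqrt(gx ** 2 + gy ** 2)

def similarity(f_ref, f_dist, c=1e-3):
    """SSIM-style similarity between two feature maps (in [0, 1])."""
    return np.mean((2 * f_ref * f_dist + c) / (f_ref ** 2 + f_dist ** 2 + c))

def rr_sr_quality(lr, sr, scale):
    """Reduced-reference score: compare features of the LR reference
    with the SR result downscaled back to the LR grid.

    Downscaling is plain block averaging here (an assumption; the
    cited metrics use their own resampling and feature choices).
    """
    h, w = lr.shape
    sr_ds = sr[:h * scale, :w * scale].reshape(h, scale, w, scale).mean(axis=(1, 3))
    tex_sim = similarity(gradient_magnitude(lr), gradient_magnitude(sr_ds))
    struct_sim = similarity(lr, sr_ds)  # crude "structure" term
    return tex_sim * struct_sim        # fuse by product (assumption)
```

A lossless nearest-neighbour upscale scores 1.0, and any distortion of the SR image pulls the score below that, which is the qualitative behaviour a reduced-reference SR metric needs.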
“…Fang et al. introduced a reduced-reference quality assessment method for image SR by predicting the energy and texture similarity between LR and HR images [16]. In [10], Tang et al. proposed another reduced-reference IQA algorithm for SR reconstructed images based on information gain and texture similarity, combined with saliency detection. Ma et al. proposed a no-reference metric trained by supervised learning on a large set of reference-free HR images [9].…”
Section: Related Work (mentioning)
confidence: 99%
“…The source codes of these metrics are obtained from the authors' public websites. Two other SR-IQA metrics [10], [16] have no publicly available code or parameters and are therefore not compared in the table. From Table V, we have the following observations: first, FocusLiteNN obtains the highest PLCC on the Waterloo-15 database, while HOSA achieves the best performance on SRCC and KRCC.…”
Section: B. Performance Comparison (mentioning)
confidence: 99%
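The comparison above ranks metrics by PLCC, SRCC, and KRCC, i.e. the Pearson, Spearman, and Kendall correlations between predicted quality scores and subjective scores. A short sketch of how these three numbers are typically computed, using the standard `scipy.stats` functions (the function and variable names are my own):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau

def correlation_scores(predicted, mos):
    """PLCC, SRCC, and KRCC between objective metric scores and
    subjective mean opinion scores (MOS)."""
    predicted = np.asarray(predicted, dtype=float)
    mos = np.asarray(mos, dtype=float)
    plcc, _ = pearsonr(predicted, mos)    # linear correlation
    srcc, _ = spearmanr(predicted, mos)   # rank-order correlation
    krcc, _ = kendalltau(predicted, mos)  # pairwise rank agreement
    return plcc, srcc, krcc
```

PLCC rewards linear agreement with MOS, while SRCC and KRCC only care about monotonic ordering, which is why a metric can lead on one measure and trail on the others, as in the Waterloo-15 results quoted above. (In full evaluation protocols, a monotonic logistic mapping is often fitted before computing PLCC; that step is omitted here.)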