2002
DOI: 10.1117/1.1455011
Statistical evaluation of image quality measures

Abstract: In this work we comprehensively categorize image quality measures, extend measures defined for gray scale images to their multispectral case, and propose novel image quality measures. They are categorized into pixel difference-based, correlation-based, edge-based, spectral-based, context-based and human visual system (HVS)-based measures. Furthermore we compare these measures statistically for still image compression applications. The statistical behavior of the measures and their sensitivity to coding…
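As a concrete illustration of the pixel difference-based category named in the abstract, the sketch below computes a mean squared error and PSNR over all bands of a multispectral image. This is a minimal example, not the paper's own implementation; the function names and the (height, width, bands) array layout are assumptions made purely for the illustration.

```python
import numpy as np

def multispectral_mse(reference: np.ndarray, distorted: np.ndarray) -> float:
    """Mean squared error averaged over all bands of a multispectral image.

    Both arrays are assumed to have shape (height, width, bands); for a
    gray-scale image the band axis can simply have size 1.
    """
    if reference.shape != distorted.shape:
        raise ValueError("reference and distorted images must have the same shape")
    diff = reference.astype(np.float64) - distorted.astype(np.float64)
    return float(np.mean(diff ** 2))

def multispectral_psnr(reference: np.ndarray, distorted: np.ndarray,
                       max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio derived from the multispectral MSE."""
    mse = multispectral_mse(reference, distorted)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)
```

The same band-averaging idea is one straightforward way to extend a gray-scale pixel-difference measure to the multispectral case, which is the kind of extension the abstract refers to.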

Cited by 530 publications (42 citation statements) · References 44 publications
“…Subsequently, the participant had to view each of the seven decoded HDRVs, including the hidden reference, and perform a qualitative assessment of how closely each decoded HDRV resembled the ground-truth HDRV in the center. Based on their judgement, the participants positioned the corresponding thumbnails in one of the blank positions on the right, labeled 1–7, where 1 denotes the HDRV with the least distortion relative to the reference and 7 the HDRV with the most visible distortions. …”
Section: Methods
confidence: 99%
“…Avcıbaş et al. [1] and Sheikh et al. [41] evaluated a number of QA metrics on distorted still images and concluded that metrics based on spectral magnitude error, perception, absolute norm, and edge stability are most suitable for detecting image artefacts. They also conclude that, although multiple QA metrics perform well on multiple image datasets, none of them performed on par with subjective quality assessment.…”
Section: Related Work
confidence: 99%
“…It is more difficult to perform a mathematical comparison of their performances. Thus, to adequately evaluate the quality of such metrics, statistical experiments are needed [8], [9]. For this purpose, a large database of distorted test images is usually prepared, and the Mean Opinion Score (MOS) from a large number of human observers is collected.…”
Section: Introduction
confidence: 99%
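The passage above describes the usual statistical protocol: collect a Mean Opinion Score (MOS) for each distorted image and check how well an objective metric agrees with it. Below is a minimal sketch of that agreement check using rank and linear correlation; the metric scores and MOS values are invented for illustration, and the logistic fitting often applied before Pearson correlation is omitted for brevity.

```python
import numpy as np
from scipy.stats import spearmanr, pearsonr

# Hypothetical data: objective metric scores and mean opinion scores (MOS)
# collected for the same set of distorted test images (values are made up).
metric_scores = np.array([34.2, 31.8, 28.5, 25.1, 22.7, 20.3])
mos = np.array([4.6, 4.1, 3.5, 2.9, 2.2, 1.8])

# Spearman rank correlation measures monotonic agreement with human
# judgement; Pearson correlation measures linear agreement.
rho, _ = spearmanr(metric_scores, mos)
r, _ = pearsonr(metric_scores, mos)
print(f"Spearman rho = {rho:.3f}, Pearson r = {r:.3f}")
```

Higher correlation against MOS over a large image database is the typical criterion by which such quality metrics are compared statistically.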
“…As multiple observers can assign different fidelity scores to an image based on their subjective judgments, the observers' responses to the image have typically been pooled into a single value to provide an overall indication of the image fidelity [1][2][3][4][5][6][8][9][10][11]. However, such pooling obscures the interobserver variability, which may be important in interpreting multiple observers' response patterns.…”
Section: Introduction
confidence: 99%