2008 15th IEEE International Conference on Image Processing 2008
DOI: 10.1109/icip.2008.4711768
A no-reference perceptual image sharpness metric based on saliency-weighted foveal pooling

Cited by 69 publications (24 citation statements)
References 7 publications
“…In [29], the block size used for finding edge pixels is 64 × 64, and a similar contribution based on JNB from the same authors is reported in [30], where a block size of 8 × 8 has been used for finding the edge pixels. The method proposed in [30] has been improved in [31] by adding the impact of saliency-weighting in foveated regions of an image. Specifically, more weighting is given to the local blur estimates that belong to salient regions of an image, while spatial blur values are pooled together to compute an overall value of blur for the whole image.…”
Section: Blurringmentioning
confidence: 99%
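The pooling step described above — weighting local blur estimates by saliency before combining them into a single score — can be sketched as a saliency-weighted average. The function and array names below are illustrative, and the normalization is an assumption; the paper's exact foveal weighting is more elaborate.

```python
import numpy as np

def saliency_weighted_blur(local_blur, saliency):
    """Pool per-block blur estimates into one score, giving more
    weight to blocks in salient regions.

    local_blur, saliency: 2-D arrays of the same shape, one value
    per image block. Illustrative sketch, not the metric's exact
    formulation.
    """
    w = saliency / (saliency.sum() + 1e-12)  # normalize saliency to weights
    return float((local_blur * w).sum())     # weighted spatial pooling

# Toy 2x2-block example: the top-right block is three times as salient,
# so its blur estimate dominates the pooled score.
blur = np.array([[0.2, 0.8],
                 [0.4, 0.6]])
sal = np.array([[1.0, 3.0],
                [1.0, 1.0]])
score = saliency_weighted_blur(blur, sal)  # → 0.6
```

A plain (unweighted) mean of the same blur map would give 0.5; the saliency weighting pulls the score toward the blur of the attended region.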
“…To this end, a variety of computational models of visual attention are implemented in different metrics by weighting local distortion maps with local saliency maps, a process referred to as "visual importance pooling" (see, e.g., [26] and [27]). The attention models used in these studies, however, are either specifically designed or chosen for a specific domain, and their accuracy in predicting human attention in general terms has not yet been fully proven.…”
Section: Added Value Of Visual Attention In Nr Blur Metricsmentioning
confidence: 99%
“…No-reference (NR) metrics estimate QoE mainly through measuring image distortions: blockiness (Leontaris & Reibman, 2005; Saad, Bovik & Charrier, 2010; Zhou, Bovik & Evan, 2000), blur (Marziliano, Dufaux, Winkler & Ebrahimi, 2002; Sadaka, Karam, Ferzli & Abousleman, 2008; Yun-Chung, Jung-Ming, Bailey, Sei-Wang & Shyang-Lih, 2004), and noise (Ghazal, Amer & Ghrayeb, 2007). An overview of existing NR image and video quality estimation studies has been given by Hemami and Reibman (Hemami & Reibman, 2010).…”
Section: Qoe Metricsmentioning
confidence: 99%