2016
DOI: 10.1007/s11760-016-0873-x
A color intensity invariant low-level feature optimization framework for image quality assessment

Abstract: Image Quality Assessment (IQA) algorithms evaluate the perceptual quality of an image using evaluation scores that assess the similarity or difference between two images. We propose a new low-level feature-based IQA technique, which applies filter-bank decomposition and center-surround methodology. Differing from existing methods, our model incorporates color intensity adaptation and frequency scaling optimization at each filter-bank level and spatial orientation to extract and enhance perceptually significant…
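The abstract describes filter-bank decomposition combined with a center-surround methodology. A minimal sketch of a center-surround response (difference of Gaussians) computed over a small scale bank, in Python with NumPy/SciPy; the scale values, surround ratio, and function name here are illustrative assumptions and do not reproduce the paper's actual filter bank, spatial orientations, or color intensity adaptation step:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround_responses(image, scales=(1, 2, 4), surround_ratio=4.0):
    """Compute center-surround (difference-of-Gaussians) responses at
    several filter-bank scales. Illustrative sketch only."""
    image = image.astype(np.float64)
    responses = []
    for sigma in scales:
        center = gaussian_filter(image, sigma)                      # fine-scale "center"
        surround = gaussian_filter(image, sigma * surround_ratio)   # coarse-scale "surround"
        responses.append(center - surround)                          # band-pass response map
    return responses

# Toy usage on a random grayscale image
img = np.random.rand(64, 64)
maps = center_surround_responses(img)
print(len(maps), maps[0].shape)  # 3 (64, 64)
```

Each response map is band-pass: it keeps structure near the center scale while suppressing both fine noise and the local mean, which is the usual motivation for center-surround features in perceptual models.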

Cited by 8 publications (8 citation statements)
References 25 publications
“…In this work we have investigated the use of deep learning for distortion-generic blind image quality assessment. We report on different design choices in three different experiments, ranging from the use of features…”

Quoted table values (median LCC / SROCC per method), reconstructed:

Method                          LCC    SROCC
[31]                            0.94   0.94
BLIINDS-II [39]                 0.92   0.91
NIQE [32]                       0.92   0.91
C-DIIVINE [51]                  0.95   0.94
FRIQUEE [12,14]                 0.95   0.93
ShearletIQM [29]                0.94   0.93
MGMSD [1]                       0.97   0.97
Low Level Features [21]         0.95   0.94
Rectifier Neural Network [45]   -      0.96
Multi-task CNN [20]             0.95   0.95
Shallow CNN [19]                0.95   0.96
DLIQA [16]                      0.93   0.93
HOSA [49]                       0.95   0.95
CNN-Prewitt [27]                0.97   0.96
CNN-SVR [26]                    0.97   0.96
DeepBIQ                         0.98   0.97

Method                          LCC    SROCC
[31]                            0.93   0.91
BLIINDS-II [39]                 0.93   0.91
Low Level Features [21]         0.94   0.94
Multi-task CNN [20]             0.93   0.94
HOSA [49]                       0.95   0.93
DeepBIQ                         0.97   0.96

Section: Discussion
confidence: 99%
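The paired values quoted above are LCC (Pearson linear correlation) and SROCC (Spearman rank-order correlation) between predicted quality scores and subjective scores. A minimal sketch of how the two metrics are computed with scipy.stats; the predicted and MOS arrays below are hypothetical toy data, not values from any cited paper:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical predicted quality scores and ground-truth MOS values
predicted = np.array([0.81, 0.62, 0.93, 0.40, 0.75])
mos       = np.array([0.80, 0.60, 0.95, 0.35, 0.70])

lcc, _ = pearsonr(predicted, mos)     # linear correlation (LCC)
srocc, _ = spearmanr(predicted, mos)  # rank-order correlation (SROCC)
print(round(lcc, 3), round(srocc, 3))
```

SROCC depends only on the rank ordering of the scores, so it is insensitive to any monotonic nonlinearity in the predictor, while LCC measures linear agreement; IQA papers typically report both, often as medians over many random train/test splits.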
Quoted table values (median LCC / SROCC per method), reconstructed:

Method                    LCC    SROCC
DIIVINE [34]              0.90   0.88
BRISQUE [31]              0.93   0.91
BLIINDS-II [39]           0.93   0.91
Low Level Features [21]   0.94   0.94
Multi-task CNN [20]       0.93   0.94
HOSA [49]                 0.95   0.93
DeepBIQ                   0.97   0.96

Method                    LCC    SROCC
[31]                      0.93   0.91
BLIINDS-II [39]           0.92   0.90
MGMSD [1]                 0.88   0.89
Low Level Features [21]   0.89   0.88
Multi-task CNN [20]       0.90   0.91
Shallow CNN [19]          0.90   0.92
DeepBIQ                   0.95   0.95

“…Table 9: Median LCC and median SROCC across 100 trainval-test random splits of the TID2013.…”

Section: Methods, LCC, SROCC
confidence: 99%
“…Perceptual tuning could be quite expensive and time consuming, especially when human opinion is required. In this section, our proposed models are used to tune a tone enhancement method [43], and an image denoiser [44]. A more detailed treatment is presented in [23].…”

[Table/figure residue interleaved with the quote: (a) 6.38 (7.16), (b) 6.24 (6.79), (c) 6…; Kim et al. [16] 0.80 0.80; Moorthy et al. [39] 0.89 0.88; Mittal et al. [40] 0.92 0.89; Saad et al. [41] 0.91 0.88; Kottayil et al. [42] 0.89 0.88; Xu et al. [35] 0.96 0.95; Bianco et al. [7] 0…]

Section: Image Enhancement
confidence: 99%
“…With the boom of deep learning, convolutional neural networks have been widely applied in IQA. Early attempts utilized relatively shallow networks (Kang et al., 2014; Kang et al., 2015; Kottayil et al., 2016) to extract features for assessing synthetic distortions. Then, deeper networks were utilized to handle more complex distortions (Bosse et al., 2017; Kim and Lee, 2017; Ma et al., 2017; Yan et al., 2019; Zhai et al., 2020; Zhang J. et al., 2020).…”

Section: Related Work
confidence: 99%