2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.01415

MetaIQA: Deep Meta-Learning for No-Reference Image Quality Assessment

Cited by 306 publications (131 citation statements); references 42 publications.
Citation statements: 0 supporting, 129 mentioning, 2 contrasting.
“…Low-quality image classification can be a challenging task. In the future, more advanced techniques or operators, such as image quality assessment methods [62] and data augmentation, can be investigated to improve the performance of low-quality image classification tasks.…”
Section: Discussion
confidence: 99%
“…[13] utilized a GAN to model an active inference module, and image quality is measured based on the primary content recovered by that module. [14] first applied meta-learning to the field of IQA: prior knowledge is collected by a meta-network, and the pre-trained knowledge is then adapted to a specific domain via small-sample fine-tuning. [15] employed a hypernetwork to generate an exclusive representation for each image, achieving surprisingly good results by mapping this representation to the quality-score latent space.…”
Section: Image Quality Assessment Methods
confidence: 99%
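The meta-learning route described in [14] (MetaIQA, the paper this report covers) follows the familiar bi-level optimization pattern: adapt on a small support set for each distortion-specific task, then update the shared initialization from the query loss. Below is a minimal MAML-style sketch of that pattern in PyTorch; the tiny backbone, the choice to adapt only the regression head in the inner loop, and all names are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal MAML-style sketch of meta-learning for NR-IQA (pattern of [14]).
# Assumptions: tasks = distortion types, L2 regression loss, only the head
# is adapted in the inner loop. All names here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QualityNet(nn.Module):
    """Tiny stand-in for a quality-regression backbone."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, 1)

    def forward(self, x, head_params=None):
        feat = self.features(x).flatten(1)
        if head_params is None:
            return self.head(feat)
        # Run the head with externally supplied (adapted) parameters.
        return F.linear(feat, head_params["weight"], head_params["bias"])

def maml_step(model, tasks, meta_opt, inner_lr=0.01):
    """One meta-update over a batch of distortion-specific tasks.

    Each task is (support_x, support_y, query_x, query_y): adapt on the
    support set, evaluate on the query set, accumulate the query loss.
    """
    meta_loss = 0.0
    for sx, sy, qx, qy in tasks:
        params = {"weight": model.head.weight, "bias": model.head.bias}
        # Inner loop: one gradient step on the task's support set.
        support_loss = F.mse_loss(model(sx, params).squeeze(-1), sy)
        grads = torch.autograd.grad(support_loss, list(params.values()),
                                    create_graph=True)  # second-order MAML
        adapted = {k: p - inner_lr * g
                   for (k, p), g in zip(params.items(), grads)}
        # Outer objective: query loss through the adapted parameters.
        meta_loss = meta_loss + F.mse_loss(model(qx, adapted).squeeze(-1), qy)
    meta_loss = meta_loss / len(tasks)
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
    return meta_loss.item()

# Toy usage with random data standing in for distortion-specific tasks.
model = QualityNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
tasks = [(torch.randn(4, 3, 32, 32), torch.rand(4),
          torch.randn(4, 3, 32, 32), torch.rand(4)) for _ in range(3)]
maml_step(model, tasks, opt)
```

After meta-training, the same small-sample fine-tuning used in the inner loop serves to specialize the learned initialization to an unseen target domain, which is the adaptation step the citation statement describes.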
“…For example, Kang et al. [7][8] proposed a multi-task shallow CNN to learn both the distortion type and the quality score; Kim and Lee [9] applied state-of-the-art FR-IQA methods to provide proxy quality scores for each image patch as ground-truth labels in the pre-training stage, and the proposed network was fine-tuned on subjective annotations. Similarly, Da Pan et al. [10] employed a U-Net to learn local quality scores previously calculated by full-reference IQA methods, with several dense layers then incorporated to pool the local scores into an overall perceptual quality score; Liang et al. [11] tried to utilize similar scenes as references to provide more prior information for the IQA model; Liu et al. [12] proposed to use RankNet to learn the quality ranking of image pairs in the training set, and then used the output of the second-to-last layer to predict the quality score; Yee et al. [13] tried to learn the unknown reference image from the distorted one by resorting to generative adversarial networks, assessing perceptual quality by comparing the hallucinated reference image with the distorted image; Chiu et al. [1] proposed a new IQA framework and a corresponding dataset that link the IQA problem to two practical vision tasks, namely image captioning and visual question answering; Su et al. [14] employed a self-adaptive hypernetwork whose parameters adjust according to image content; Zhu et al. [15] leveraged meta-learning to learn a general-purpose BIQA model from training sets of several specific distortion types.…”
Section: Related Work
confidence: 99%
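Among the approaches quoted above, the pairwise-ranking idea attributed to Liu et al. [12] is compact enough to sketch. The snippet below shows the standard RankNet loss (binary cross-entropy on the difference of two predicted scores) with a toy shared scorer; the scorer architecture, tensor shapes, and variable names are assumptions for illustration, not details from the cited paper.

```python
# Hedged sketch of pairwise quality ranking in the RankNet style of [12]:
# one shared network scores both images of a pair, and the model learns
# P(a preferred over b) = sigmoid(s_a - s_b) via binary cross-entropy.
import torch
import torch.nn as nn
import torch.nn.functional as F

def ranknet_loss(score_a, score_b, a_better):
    """RankNet pairwise loss.

    score_a, score_b: predicted quality scores for the two images of a pair.
    a_better: 1.0 where image a has the higher subjective quality, else 0.0.
    """
    return F.binary_cross_entropy_with_logits(score_a - score_b, a_better)

# Toy usage: a placeholder scorer applied to both images of each pair.
scorer = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1))
img_a, img_b = torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)
a_better = torch.randint(0, 2, (8,)).float()  # hypothetical pair labels
loss = ranknet_loss(scorer(img_a).squeeze(-1),
                    scorer(img_b).squeeze(-1), a_better)
loss.backward()
```

As the citation statement notes, after ranking-based pre-training the penultimate layer's features can be regressed onto absolute quality scores, so the pairwise stage only needs relative (cheaper-to-collect) labels.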