2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW) 2019
DOI: 10.1109/iccvw.2019.00485
CNN-Based Cross-Dataset No-Reference Image Quality Assessment

Cited by 18 publications (20 citation statements)
References 42 publications
“…Those methods assumed natural images, and machine learning was used to learn the statistics of natural images and the features of distortion. Recently the convolutional neural network (CNN) has been applied instead of handcrafted features 23,24) . The use of a generative adversarial network has also been proposed 25) .…”
Section: Related Work
confidence: 99%
“…Module II models the temporal-memory effect and includes two sub-modules: a GRU network and a subjectively-inspired temporal pooling layer. Note that the GRU network is the unrolled version of one GRU and the parallel CNNs/FCs share weights. Several methods consider pair-wise learning for mixed datasets training, while they use different loss functions for training (Yang et al. 2019; Zhang et al. 2019b; Krasula et al. 2020). Yang et al. (2019) use the margin ranking loss and the Euclidean loss.…”
Section: Mixed Datasets Training
confidence: 99%
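The pair-wise learning mentioned in the excerpt above trains on relative quality judgments rather than absolute scores, which is what makes it usable across datasets with incompatible MOS scales. A minimal sketch of the margin ranking loss used in that setting (the function name and scalar formulation are illustrative, not taken from the cited papers):

```python
def margin_ranking_loss(score_preferred, score_other, margin=1.0):
    """Pairwise margin ranking loss (hinge on the score difference).

    `score_preferred` is the model's predicted quality for the image
    subjects judged better; the loss is zero once it exceeds
    `score_other` by at least `margin`, so only wrongly ordered or
    too-close pairs contribute to training.
    """
    return max(0.0, margin - (score_preferred - score_other))
```

Because each pair is drawn from within one dataset, the loss never compares raw scores across datasets, which sidesteps the scale-mismatch problem that motivates mixed-datasets training in the first place.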
“…The performance of the EPL method, given the most suitable amount of initial training data, is compared with state-of-the-art NR-IQA methods, including: classical NR-IQA methods (BLIINDSS [30], BRISQUE [28], BWS [5], CORNIA [31], GMLOG [51], IL-NIQE [6], and FRIQUEE [34]), and DNN-based NR-IQA methods (CNN [12], RankIQA [23], BIECON [20], DIQaM [17], DIQA [22], CaHFI [52], NRVPD [53], ESD [54], VS-DDON [55], NQS-GAN [56], and ILGNet [57]). This method was also compared with the well-known DNN models AlexNet [10], ResNet50 [48], and VGG-16 [26], which were modeled using the LIVEC database.…”
Section: Evaluation Process
confidence: 99%
“…When compared with using only hand-crafted features, the combined strategy outperforms. In addition, compared with the end-to-end deep learning methods (CNN [12], RankIQA [23], BIECON [20], DIQaM [17], DIQA [22], CaHFI [52], NRVPD [53], ESD [54], VS-DDON [55], NQS-GAN [56], and ILGNet [57]), since those algorithms are mostly directed at synthetic distortion, their learning of authentic distortion features is insufficient. Consequently, although it has not been adjusted by the IQA database, the proposed method is still superior to all the methods.…”
Section: Performance Comparison
confidence: 99%