2016 IEEE International Symposium on Multimedia (ISM)
DOI: 10.1109/ism.2016.0076
Classification of Image Distortions Based on Features Evaluation

Cited by 5 publications (6 citation statements)
References 17 publications
“…Zhai and Min [14] provided an overview of classical algorithms and recent progress in perceptual image quality assessment. Alaql et al. [15] investigated the performance of different classification techniques and features to improve the classification of distortions followed by IQA. Based on a neural network, Kaur et al. proposed a novel no-reference IQA method using Canny magnitude and achieved excellent results on the LIVE and TID2008 datasets, proving the efficiency of ANNs [13].…”
Section: Related Work and Novelties and Necessity of the Study
confidence: 99%
“…In [23, 24], we proposed an image distortion classification model that presents an efficient set of features which overcome the limitations of existing blind IQA features with regard to representing different distortion types and mixtures. In that framework [23, 24], a total of 30 features, which have been validated to provide significant information about different distortion types of an image, were selected as input to the deep learning model. These features are more efficient than many other state‐of‐the‐art blind IQA features.…”
Section: Our Approach
confidence: 99%
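As a rough illustration of the pipeline this citation describes, the sketch below maps a 30-dimensional feature vector for one image to distortion-class probabilities with a small feed-forward classifier. The layer sizes, class count, and weights are assumptions for illustration only; the cited work uses a deep learning model whose exact architecture is not reproduced here.

```python
import numpy as np

# Hypothetical sketch: a 30-dimensional feature vector per image is mapped to
# distortion-class probabilities. Layer sizes, class labels, and weights are
# illustrative assumptions, not the authors' trained model.

N_FEATURES = 30          # statistical features per image (as stated in the text)
N_CLASSES = 5            # e.g. blur, JPEG, JP2K, white noise, fast fading (assumed)

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(N_FEATURES, 16))   # input -> hidden weights
b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, N_CLASSES))    # hidden -> output weights
b2 = np.zeros(N_CLASSES)

def classify(features: np.ndarray) -> np.ndarray:
    """Return a probability distribution over distortion classes."""
    h = np.tanh(features @ W1 + b1)                  # hidden representation
    logits = h @ W2 + b2
    exp = np.exp(logits - logits.max())              # numerically stable softmax
    return exp / exp.sum()

# Example: one image described by 30 (here random) feature values.
probs = classify(rng.normal(size=N_FEATURES))
print(probs)                                         # sums to 1 over the classes
```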
“…The RBM consists of m visible units v = (v_1, v_2, …, v_m) and n hidden units h = (h_1, h_2, …, h_n), with no visible-to-visible or hidden-to-hidden connections [25]. The 30 statistical features that were validated in [23, 24] to contribute significant information regarding distortion visibility were selected as input to each DBN. The DBN learns the relationship between image representation and labels in two phases: unsupervised pre-training and supervised fine-tuning.…”
Section: Our Approach
confidence: 99%
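The RBM structure this citation describes (a bipartite visible/hidden layer with no intra-layer connections, pre-trained without labels) can be sketched with one step of contrastive divergence (CD-1) in NumPy. The dimensions, learning rate, and binary-unit assumption below are illustrative, not the cited implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Minimal RBM sketch: m visible units, n hidden units, a weight matrix W, and
# no visible-visible or hidden-hidden connections. Unsupervised pre-training is
# approximated here by CD-1; sizes and learning rate are assumed values.

m, n, lr = 30, 20, 0.05                    # 30 visible units = 30 input features
rng = np.random.default_rng(1)
W = rng.normal(scale=0.01, size=(m, n))    # visible-hidden weights
b = np.zeros(m)                            # visible biases
c = np.zeros(n)                            # hidden biases

def cd1_update(v0: np.ndarray) -> None:
    """One CD-1 update from a single visible vector v0 (values in [0, 1])."""
    global W, b, c
    ph0 = sigmoid(v0 @ W + c)                           # p(h=1 | v0)
    h0 = (rng.random(n) < ph0).astype(float)            # sample hidden states
    pv1 = sigmoid(h0 @ W.T + b)                         # p(v=1 | h0): reconstruction
    ph1 = sigmoid(pv1 @ W + c)                          # p(h=1 | reconstruction)
    W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))  # positive minus negative phase
    b += lr * (v0 - pv1)
    c += lr * (ph0 - ph1)

cd1_update(rng.random(m))                  # e.g. one normalised feature vector
```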
“…These probabilities are used as weight values that indicate the amount of each distortion in the image. In this work, we use the classification model proposed in [2], [3]. In the regression stage, the final quality score is computed from five trained models, one model for each distortion type.…”
Section: The Framework Structure
confidence: 99%
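The fusion step this citation describes, where classification probabilities weight distortion-specific quality predictions, might look like the sketch below. The per-distortion regressors are stand-in linear models with placeholder coefficients; the class labels and feature values are assumptions for illustration.

```python
import numpy as np

# Hedged sketch of probability-weighted quality fusion: each of five
# distortion-specific models produces a quality estimate, and the final score
# is their combination weighted by the classifier's probabilities.

N_FEATURES, N_DISTORTIONS = 30, 5
rng = np.random.default_rng(2)
regressors = rng.normal(size=(N_DISTORTIONS, N_FEATURES))  # stub model per distortion

def quality_score(features: np.ndarray, class_probs: np.ndarray) -> float:
    """Weight each distortion-specific prediction by its classification probability."""
    per_distortion = regressors @ features         # quality estimate from each model
    return float(class_probs @ per_distortion)     # probability-weighted final score

features = rng.normal(size=N_FEATURES)
class_probs = np.array([0.6, 0.2, 0.1, 0.05, 0.05])  # e.g. mostly blur (assumed labels)
print(quality_score(features, class_probs))
```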