2022
DOI: 10.3390/app12199567
A Visual Saliency-Based Neural Network Architecture for No-Reference Image Quality Assessment

Abstract: Deep learning has recently been applied extensively to blind image quality assessment (BIQA). Yet the scarcity of high-quality algorithms prevents further development and deployment in real-time scenarios. Patch-based techniques have been used to predict image quality, but they typically assign the whole image's quality score to every individual patch, which produces many misleading patch-level scores. Some regions of the image are important and can cont…
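The patch-aggregation issue raised in the abstract can be illustrated with a small sketch: instead of assigning one global score to every patch, per-patch quality predictions are pooled with saliency-derived weights so that salient regions dominate the final score. This is a minimal, hypothetical NumPy illustration; the function name and weighting scheme are assumptions, not the paper's actual method.

```python
import numpy as np

def saliency_weighted_score(patch_scores, patch_saliency):
    """Pool per-patch quality predictions, weighting each patch by its mean saliency.

    Hypothetical sketch: a uniform baseline would average all patches equally;
    this version lets high-saliency patches contribute more to the image score.
    """
    patch_scores = np.asarray(patch_scores, dtype=float)
    weights = np.asarray(patch_saliency, dtype=float)
    weights = weights / weights.sum()          # normalize saliency to a distribution
    return float(np.dot(weights, patch_scores))

scores = [0.9, 0.2, 0.8, 0.3]      # per-patch quality predictions
saliency = [0.7, 0.1, 0.15, 0.05]  # per-patch mean saliency
print(saliency_weighted_score(scores, saliency))  # 0.785
```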

Cited by 7 publications (5 citation statements) · References 37 publications
“…In [36], the authors improved this approach by applying a nonlinear bilateral smoothing filter and a nearest-neighbour sampling approach [37]. Prior to deep feature extraction, Ryu [38] introduced a static saliency detection module to identify the regions to which humans tend to pay more attention. In contrast, Celona and Schettini [39] devised a deep architecture that handles images at different scales.…”
Section: Literature Review
confidence: 99%
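The snippet above refers to a static saliency detection module applied before feature extraction. The module used in [38] is not specified here; one classic static saliency detector is the spectral-residual method, sketched below in a simplified NumPy form (3×3 box-filter smoothing with circular padding, no final blur) purely as an illustration of the idea.

```python
import numpy as np

def spectral_residual_saliency(gray):
    """Static saliency map from the spectral residual of the log-amplitude spectrum.

    Simplified sketch, not necessarily the module used in the cited work.
    """
    f = np.fft.fft2(gray)
    log_amp = np.log(np.abs(f) + 1e-8)   # log-amplitude spectrum
    phase = np.angle(f)                  # keep the original phase
    # Smooth the log-amplitude spectrum with a 3x3 box filter (circular padding).
    smooth = sum(np.roll(np.roll(log_amp, dy, axis=0), dx, axis=1)
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    residual = log_amp - smooth          # the "spectral residual"
    # Reconstruct with the original phase; squared magnitude gives saliency.
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / sal.max()               # normalize to [0, 1]
```

High-saliency pixels would then be emphasized (e.g. via weighting or masking) before the deep feature extractor sees the image.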
“…In InceptionResnetV2, input images of size 299 × 299 pixels (Ryu, 2023) undergo convolutional layers to extract features (Wan et al, 2019). Inception modules capture multi-scale features with different filter sizes (Nazir et al, 2019), followed by a filter expansion layer (1 × 1 convolution without activation) to match input depth (Ryu, 2022). Output feature maps are merged through filter concatenation.…”
Section: Dataset Classification Mechanism With InceptionResnetV2
confidence: 99%
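The filter-concatenation and filter-expansion step described above can be sketched compactly: branch outputs sharing a spatial size are concatenated along the channel axis, then a 1×1 convolution (no activation) projects them back to the block's input depth. Since a 1×1 convolution is just a per-pixel linear map over channels, NumPy's `tensordot` expresses it directly. The channel counts below are arbitrary illustrations, not InceptionResNetV2's actual ones.

```python
import numpy as np

def filter_expansion(branches, w):
    """Concatenate branch feature maps, then apply a 1x1 conv (channel mixing)."""
    cat = np.concatenate(branches, axis=-1)        # (H, W, C_cat)
    return np.tensordot(cat, w, axes=([-1], [0]))  # (H, W, C_out)

rng = np.random.default_rng(0)
branches = [rng.random((8, 8, c)) for c in (4, 6, 6)]  # three parallel branches
w = rng.random((16, 32))        # 1x1 conv weights: 16 concatenated -> 32 channels
out = filter_expansion(branches, w)
print(out.shape)  # (8, 8, 32)
```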
“…IQA is crucial to ensure that an image is free from distortions such as noise or blur, so that the objects in the image can be identified clearly, which is helpful for surveillance. IQA can be categorized into two forms: subjective IQA (SIQA) and objective IQA (OIQA) [2]. The application of drones is becoming highly popular nowadays, ranging from agriculture to military surveillance. In agriculture, drones help many farmers monitor crop productivity and reduce the overall cost of farming [11,12].…”
Section: Introduction
confidence: 99%