2021
DOI: 10.1016/j.sigpro.2021.107996

Global context guided hierarchically residual feature refinement network for defocus blur detection

Cited by 8 publications (7 citation statements)
References 27 publications
“…Two methods [18,21] used a similar approach of extraction from local integral values of the images but used either a fixed static threshold or one from a specified range. This hard-coded threshold scheme results in performance degradation, as shown in Figure 11, where [21] is unable to detect the regions in the image, whereas the proposed method used an adaptive threshold computed using the deviation between the neighboring pixels and performed very well. Our experiments further reveal that the results of [18,21] show gaps between the extracted sharp regions, whereas the proposed method produced approximately filled regions.…”
Section: Discussion
confidence: 99%
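The adaptive threshold described in this statement lends itself to a short illustration. The sketch below is not the cited paper's code; it assumes a per-pixel sharpness map as input, and the window size and scaling factor k are illustrative choices. It binarizes the map with a threshold that follows the local mean and the deviation between neighboring pixels rather than a single hard-coded value.

import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_sharp_mask(sharpness_map, window=15, k=1.0):
    """Binarize a per-pixel sharpness map with a locally adaptive threshold."""
    local_mean = uniform_filter(sharpness_map, size=window)
    local_sq_mean = uniform_filter(sharpness_map ** 2, size=window)
    # Deviation between neighboring pixels within the local window.
    local_std = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 0.0))
    # The threshold adapts to local statistics instead of being hard-coded.
    threshold = local_mean + k * local_std
    return (sharpness_map > threshold).astype(np.uint8)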
“…This provides significantly better results due to the large deviation of the region. Our results indicate that pixel selection based on a small number of adjacent pixels affects the results, whereas using a large number of adjacent pixels to determine the pixel selection leads to good results. Existing defocus blur detection methods, based on using the integral values directly, fail to operate well for motion blur.…”
Section: Discussion
confidence: 99%
“…The CNN is trained using blurred patches to estimate the optimum parameter based on blurriness to achieve the best sharpening result. In [36–40], CNN architectures are proposed for end-to-end defocus map estimation; these networks were trained using either natural or synthetic images labelled at the pixel level to segment the focused regions of the image, producing outstanding results. Li et al. [41], on the other hand, proposed a CNN-based method to estimate spatially varying defocus blur in a single image, using synthetic data to train the network and domain transfer to bridge the gap between real and synthetic images.…”
Section: Introduction
confidence: 99%
“…Inspired by these works, the motivation of this study is to explore the application of deep neural networks to the challenging problem of estimating the defocus map from a single image. The main contribution of this work is that, unlike previous works [36–40], we treat blurriness estimation as a self-supervised multi-class classification problem, which is solved by training a CNN to classify a patch of the input image into one of 20 levels of blurriness. The output of the CNN is a patch-based estimation of blurriness.…”
Section: Introduction
confidence: 99%
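As a concrete illustration of the patch-based, multi-class formulation described in this statement, the sketch below defines a small classifier that maps an image patch to one of 20 blurriness levels. It is not the authors' network: the patch size, channel widths, and layer count are assumptions made only for illustration.

import torch
import torch.nn as nn

NUM_BLUR_LEVELS = 20  # one class per blurriness level

class PatchBlurClassifier(nn.Module):
    """Classifies an image patch into one of 20 blurriness levels."""
    def __init__(self, patch_size=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * (patch_size // 4) ** 2, NUM_BLUR_LEVELS)

    def forward(self, patch):
        x = self.features(patch)
        return self.classifier(x.flatten(start_dim=1))

# Training would use cross-entropy over the 20 levels; at inference the
# per-patch predictions are assembled into a patch-wise defocus map.
model = PatchBlurClassifier()
logits = model(torch.randn(1, 3, 32, 32))  # shape (1, 20)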