2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2017.71

Spatially-Varying Blur Detection Based on Multiscale Fused and Sorted Transform Coefficients of Gradient Magnitudes

Abstract: The detection of spatially-varying blur without having any information about the blur type is a challenging task. In this paper, we propose a novel effective approach to address this blur detection problem from a single image without requiring any knowledge about the blur type, level, or camera settings. Our approach computes blur detection maps based on a novel High-frequency multiscale Fusion and Sort Transform (HiFST) of gradient magnitudes. The evaluations of the proposed approach on a diverse set of blurred…
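The abstract describes the pipeline only at a high level. As a rough illustration of the idea it names (gradient magnitudes, patch-wise DCT at multiple scales, high-frequency coefficients fused across scales and sorted in ascending order), a minimal, unoptimized Python sketch might look as follows; the scale set, the single-layer readout, and the function name are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from scipy.fft import dctn  # n-dimensional type-II DCT


def hifst_sharpness_map(image, scales=(8, 16, 32), layer=0):
    """Simplified HiFST-style sharpness map (assumed configuration)."""
    # Gradient magnitude of the grayscale input.
    gy, gx = np.gradient(image.astype(np.float64))
    grad_mag = np.hypot(gx, gy)

    h, w = grad_mag.shape
    sharpness = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            fused = []
            for n in scales:
                r = n // 2
                patch = grad_mag[max(0, y - r):y + r + 1,
                                 max(0, x - r):x + r + 1]
                coeffs = np.abs(dctn(patch, norm='ortho'))
                ii, jj = np.indices(coeffs.shape)
                # Keep only high-frequency coefficients: index sums at
                # or past the anti-diagonal of the coefficient block.
                fused.extend(coeffs[ii + jj >= min(coeffs.shape) - 1])
            fused.sort()  # ascending: weakest high-frequency energy first
            sharpness[y, x] = fused[layer]
    # Normalize to [0, 1]; low values indicate locally blurred regions.
    return sharpness / (sharpness.max() + 1e-12)
```

The per-pixel loop performs one DCT per scale per pixel, so a real implementation would vectorize it; the published method also builds its final map from several normalized sorted layers rather than the single layer read off here.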


Cited by 100 publications (89 citation statements)
References 44 publications
“…Liu et al. [32] proposed a blurred image classification and analysis framework for detecting images containing blurred regions and recognizing the blur types for those regions without needing to perform blur kernel estimation and image deblurring. Golestaneh et al. [33] proposed a spatially-varying blur detection method. Kalalembang et al. [34] presented a method of detecting unwanted motion blur effects.…”
Section: Related Work (mentioning)
confidence: 99%
“…[23], Shi et al. [11], LBP [18], and HiFST [3]. In addition, we include the performance reported for the same even subset by one of the most recent deep CNN-based defocus and motion blur detection methods in the literature, i.e., the Deep Blur Mapping approach by Ma et al.…”
Section: Self-Supervised Setup (mentioning)
confidence: 99%
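The comparisons quoted above reduce each method's blur map to pixel-wise ranking metrics. For reference, a minimal scoring routine for one image, assuming the per-pixel AUC and average precision reported in the passage below, could look like this; the helper name and the per-image (rather than pooled) aggregation are assumptions:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score


def score_blur_map(pred_map, gt_mask):
    """Per-pixel AUC and AP for one predicted blur map.

    pred_map: float array, higher values = more likely blurred.
    gt_mask:  binary array of the same shape (1 = blurred pixel).
    Assumes the mask contains both classes.
    """
    y_true = np.asarray(gt_mask).ravel().astype(int)
    y_score = np.asarray(pred_map, dtype=float).ravel()
    return (roc_auc_score(y_true, y_score),
            average_precision_score(y_true, y_score))
```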
“…Furthermore, the proposed self-supervised learning method, when used to train the off-the-shelf DeepLabv3 resnet101 network and without ever observing a single image with real blur, yields better overall AUC and AP values than Ma et al.'s CNN architecture [31], whose design was tuned ad hoc for this task and trained end-to-end in a fully supervised setup over the 500-odd samples of the dataset. Fig. 5 contains visual results for a small random subset of images affected by both types of blur (defocus blur in the top seven rows, motion blur in the bottom seven), as predicted for most of the considered methods. We can observe that, even without the utilization of any single ground truth blur segmentation annotation from the target dataset for direct supervision, our self-supervised approach obtains accurate masks, comparable in visual quality to those produced by fully-supervised deep CNN-based methods, such as [31].…”
Section: Self-Supervised Setup (mentioning)
confidence: 99%
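For concreteness, wiring the off-the-shelf DeepLabv3 resnet101 mentioned above to a two-class (sharp vs. blurred) per-pixel prediction takes only a few lines with torchvision. This is a hedged sketch of that setup, not the cited work's actual training code; the two-class head and the input resolution are assumptions:

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet101

# Off-the-shelf DeepLabv3 with a ResNet-101 backbone, re-headed for a
# two-class (sharp vs. blurred) segmentation task. The two output
# classes are an illustrative assumption.
model = deeplabv3_resnet101(num_classes=2)
model.eval()

x = torch.randn(1, 3, 384, 384)          # dummy RGB batch
with torch.no_grad():
    logits = model(x)["out"]              # shape (1, 2, 384, 384)
blur_prob = logits.softmax(dim=1)[:, 1]   # per-pixel blur probability
```

Thresholding `blur_prob` then yields a binary blur mask comparable to the ground-truth segmentation annotations discussed in the quoted passage.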