2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition 2018
DOI: 10.1109/cvpr.2018.00325
Defocus Blur Detection via Multi-stream Bottom-Top-Bottom Fully Convolutional Network

Cited by 75 publications (67 citation statements)
References 34 publications
“…The smaller the MAE, the higher the accuracy. To compare the performance of different algorithms, we used three state-of-the-art in-focus region detection models: BTBNet [43], LBP [47] and MGF [46]. Because most of the objects in the dataset are fiber edges, state-of-the-art edge detection models, such as BDCN [42], RCF [41], HED [40] and U-Net [24], were also selected for comparison with the MIDN.…”
Section: Experiments, A. MIDN for Image In-focus Points Extraction
mentioning
confidence: 99%
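The excerpt above ranks methods by MAE over predicted in-focus maps. As a minimal sketch (not the cited paper's evaluation code), per-pixel mean absolute error between a predicted map and a ground-truth annotation can be computed as:

```python
def mae(pred, gt):
    """Mean absolute error between two per-pixel maps with values in [0, 1].

    A lower MAE means the predicted in-focus map is closer to the
    ground-truth annotation."""
    flat_pred = [v for row in pred for v in row]
    flat_gt = [v for row in gt for v in row]
    return sum(abs(p - g) for p, g in zip(flat_pred, flat_gt)) / len(flat_pred)

# Toy 2x2 maps: two pixels match exactly, two are off by 0.5.
print(mae([[1.0, 0.0], [0.5, 0.5]],
          [[1.0, 0.0], [0.0, 1.0]]))  # 0.25
```

Benchmarks typically average this per-image score over the whole test set.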
“…However, the experiments showed that this method cannot distinguish low-contrast focus regions. Zhao et al. [43] designed a multi-stream network (BTBNet) to detect in-focus regions. BTBNet composes streams from multiple convolutional layers and uses the streams to extract features at different scales.…”
mentioning
confidence: 99%
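The excerpt describes BTBNet's streams as extracting features at different scales. The image-pyramid idea behind such multi-scale processing can be sketched in plain Python with a hypothetical 2x2 average-pooling helper (this is an illustration of the concept only, not the paper's actual convolutional streams):

```python
def avg_pool2(img):
    """Halve the resolution by 2x2 average pooling (a stand-in for the
    downsampling between scales)."""
    h, w = len(img), len(img[0])
    return [[(img[2 * i][2 * j] + img[2 * i][2 * j + 1] +
              img[2 * i + 1][2 * j] + img[2 * i + 1][2 * j + 1]) / 4.0
             for j in range(w // 2)] for i in range(h // 2)]

def multi_stream_scales(img, n_streams=3):
    """Build an image pyramid: each stream would operate on the input
    at one of these progressively coarser scales."""
    scales = [img]
    for _ in range(n_streams - 1):
        scales.append(avg_pool2(scales[-1]))
    return scales

pyramid = multi_stream_scales([[float(i + j) for j in range(8)] for i in range(8)])
print([(len(s), len(s[0])) for s in pyramid])  # [(8, 8), (4, 4), (2, 2)]
```

In the actual network, each scale would feed a learned convolutional stream and the per-scale predictions would then be fused.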
“…By extensively evaluating the learned model on the two largest publicly available pixel-wise annotated real blur localization datasets [11,35], we show that our approach generalizes adequately for both blur types, improving the performance of all the considered classic approaches and that of recent fully supervised deep CNN-based ones that require large human-labeled training sets. Experimental results show that the proposed solution can be successfully employed to train a deep blur segmentation model, either without the need for any specific blur localization dataset or by making use of a very reduced set of images exhibiting real blur degradations.…”
mentioning
confidence: 94%
“…More recently, supervised learning-based approaches [25], and particularly those based on the use of Convolutional Neural Networks (CNN), have shown enormous potential for tackling tasks that require a dense, per-pixel prediction, such as semantic segmentation [27,28], instance segmentation [29] or crowd counting via density map estimation [30]. Blur segmentation can also be viewed as one such dense prediction task, and several works have already explored this approach, either for predicting both blur types [31,1,32] or defocus blur only [33,34,35,36]. Nevertheless, the performance gain obtained by these fully supervised, CNN-based approaches trained end-to-end is relatively modest when compared to gains in other fields.…”
Section: Introduction
mentioning
confidence: 99%