2021
DOI: 10.1016/j.dsp.2020.102907
Optimized deep encoder-decoder methods for crack segmentation

Cited by 62 publications (60 citation statements) · References 17 publications
“…Chen et al. [7] develop a rotation-invariant fully convolutional network (FCN) called ARF-Crack to utilize the rotation-invariant property of cracks. Konig et al. [9] introduce a new design for the decoder, which leads to improved performance. While these methods demonstrate promising results, it is important to note that they allow a tolerance margin of a few pixels between predictions and ground truth during evaluation.…”
Section: Ground Truth
Citation type: mentioning
Confidence: 99%
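The tolerance-margin evaluation mentioned in this excerpt can be made concrete with a minimal sketch: a predicted crack pixel counts as correct if a ground-truth crack pixel lies within a few pixels of it, and vice versa. This is not the cited papers' code; the function name, the square structuring element, and the 2-pixel default margin are illustrative assumptions.

```python
# Minimal sketch of tolerance-margin precision/recall for crack masks
# (illustrative only; not the evaluation code of the cited works).
import numpy as np
from scipy.ndimage import binary_dilation

def tolerant_precision_recall(pred: np.ndarray, gt: np.ndarray, tolerance: int = 2):
    """pred, gt: boolean crack masks of shape (H, W); tolerance: margin in pixels."""
    # Square structuring element of radius `tolerance` (an assumption; a disk is also common).
    struct = np.ones((2 * tolerance + 1, 2 * tolerance + 1), dtype=bool)
    gt_dilated = binary_dilation(gt, structure=struct)
    pred_dilated = binary_dilation(pred, structure=struct)

    tp_pred = np.logical_and(pred, gt_dilated).sum()  # predicted pixels near any GT pixel
    tp_gt = np.logical_and(gt, pred_dilated).sum()    # GT pixels near any predicted pixel

    precision = tp_pred / max(int(pred.sum()), 1)
    recall = tp_gt / max(int(gt.sum()), 1)
    return precision, recall
```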
“…DRIVE has 40 color fundus photographs. For the training and test sets, we split AigleRN equally into 3 folds for cross-validation, split CFD into 71 training images and 46 testing images following the setting of [9,11,23], and split DRIVE into 20 training images and 20 testing images following the setting of [18,19].…”
Section: Experiments 3.1 Experimental Setup
Citation type: mentioning
Confidence: 99%
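A minimal sketch of the splits described in this excerpt, assuming per-dataset image file lists: AigleRN into 3 cross-validation folds, CFD into 71 train / 46 test images, and DRIVE into 20 train / 20 test images. The file names, the AigleRN image count, and the fixed seed are illustrative assumptions, not values from the cited papers.

```python
# Illustrative dataset splits matching the counts quoted above (not the authors' code).
import random

def split_fixed(images, n_train):
    """Deterministic split: first n_train images for training, the rest for testing."""
    return images[:n_train], images[n_train:]

def split_folds(images, n_folds=3, seed=0):
    """Shuffle once with a fixed seed, then partition into roughly equal folds."""
    images = list(images)
    random.Random(seed).shuffle(images)
    return [images[i::n_folds] for i in range(n_folds)]

# Hypothetical file lists; the AigleRN count of 38 is an assumption.
aiglern_folds = split_folds([f"aiglern_{i:03d}.png" for i in range(38)], n_folds=3)
cfd_train, cfd_test = split_fixed([f"cfd_{i:03d}.png" for i in range(117)], n_train=71)
drive_train, drive_test = split_fixed([f"drive_{i:02d}.tif" for i in range(40)], n_train=20)
```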