Proceedings of the 2019 3rd International Conference on Computer Science and Artificial Intelligence
DOI: 10.1145/3374587.3374602
Road Crack Image Segmentation Using Global Context U-net

Cited by 9 publications (9 citation statements)
References 11 publications
“…Cao et al. [8] replaced the U-Net encoder with ResNet34 to deal with the loss of spatial information caused by continuous pooling, effectively avoiding gradient vanishing or explosion. Chen et al. [9] embedded a global context module in the U-Net network structure to give the network the ability to capture global context information, which benefits the detailed segmentation of pavement crack images. Augustauskas and Lipnickas [10] introduced an attention mechanism based on the U-shaped network.…”
Section: Introduction
confidence: 99%
“…Zhang et al., 2019). Our implementation is based on the work presented in (Chen, Liu, & Chen, 2019) and has achieved normal performance on this segmentation task.…”
Section: Results
confidence: 99%
“…U-Net produces the best F1 score, outperforming non-deep-learning methods on the CrackForest dataset; there seem to be no obvious thick results, but also no illustrated results showing how the model performs on images with significant shadow. In Global Context U-Net (Chen et al., 2019), U-Net is modified to include lightweight global context blocks that use the attention mechanism to make the model focus on global context information and long-distance dependencies. This work uses the standard binary cross-entropy loss, and its results on the Crack500 dataset are better than those of the original U-Net.…”
Section: Related Work
confidence: 99%
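The global context block described in the statement above (attention pooling the whole feature map into a single context vector, which is then transformed and added back to every spatial position) can be sketched roughly as follows. This is a minimal NumPy illustration of a GCNet-style block, not the authors' actual implementation; all function and parameter names (`global_context_block`, `w_k`, `w_v1`, `w_v2`) are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_context_block(x, w_k, w_v1, w_v2):
    """Lightweight global-context block (GCNet-style sketch).

    x    : (C, H, W) feature map from a U-Net stage
    w_k  : (C,)      1x1-conv weights producing one attention logit
                     per spatial position
    w_v1 : (C_r, C)  bottleneck down-projection
    w_v2 : (C, C_r)  bottleneck up-projection
    """
    C, H, W = x.shape
    flat = x.reshape(C, H * W)                  # (C, N)

    # context modelling: softmax attention over all spatial positions,
    # so every position contributes to one global context vector
    attn = softmax(w_k @ flat)                  # (N,)
    context = flat @ attn                       # (C,)

    # transform: channel bottleneck with ReLU keeps the block lightweight
    t = w_v2 @ np.maximum(w_v1 @ context, 0.0)  # (C,)

    # fusion: broadcast-add the transformed context to every position,
    # injecting long-distance dependency information into local features
    return x + t[:, None, None]
```

Because the same context vector is added everywhere, the block's cost is independent of how far apart two pixels are, which is what lets a U-Net with such blocks relate distant crack segments at little extra expense.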