2022
DOI: 10.3390/s22062330

Defect Detection of Subway Tunnels Using Advanced U-Net Network

Abstract: In this paper, we present a novel defect detection model based on an improved U-Net architecture. As a semantic segmentation task, defect detection faces background–foreground imbalance, multi-scale targets, and feature similarity between background and defects in real-world data. Conventional convolutional neural network (CNN)-based models focus mainly on natural-image tasks and are insensitive to these problems. The proposed method has a network design…
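The abstract describes an improved U-Net for pixel-wise defect segmentation under background–foreground imbalance and multi-scale targets. The paper's specific modifications are not reproduced here; the sketch below is only a minimal PyTorch baseline of a plain U-Net-style encoder–decoder with a positive-class-weighted loss, to illustrate the kind of network the authors build on. All layer widths and the pos_weight value are illustrative assumptions, not the authors' settings.

```python
# Minimal U-Net-style encoder-decoder (illustrative baseline only;
# NOT the paper's improved architecture). Layer widths are assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with BatchNorm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    def __init__(self, in_ch=3, n_classes=1, widths=(32, 64, 128, 256)):
        super().__init__()
        self.enc1 = conv_block(in_ch, widths[0])
        self.enc2 = conv_block(widths[0], widths[1])
        self.enc3 = conv_block(widths[1], widths[2])
        self.bottleneck = conv_block(widths[2], widths[3])
        self.pool = nn.MaxPool2d(2)
        self.up3 = nn.ConvTranspose2d(widths[3], widths[2], 2, stride=2)
        self.dec3 = conv_block(widths[3], widths[2])
        self.up2 = nn.ConvTranspose2d(widths[2], widths[1], 2, stride=2)
        self.dec2 = conv_block(widths[2], widths[1])
        self.up1 = nn.ConvTranspose2d(widths[1], widths[0], 2, stride=2)
        self.dec1 = conv_block(widths[1], widths[0])
        self.head = nn.Conv2d(widths[0], n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                     # full resolution
        e2 = self.enc2(self.pool(e1))         # 1/2
        e3 = self.enc3(self.pool(e2))         # 1/4
        b = self.bottleneck(self.pool(e3))    # 1/8
        d3 = self.dec3(torch.cat([self.up3(b), e3], dim=1))   # skip connection
        d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                  # per-pixel defect logits

# Background-foreground imbalance can be offset by up-weighting the rare
# defect pixels in the loss. The weight 20.0 is a placeholder, not a paper value.
model = MiniUNet()
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([20.0]))
x = torch.randn(2, 3, 256, 256)                        # dummy image batch
target = (torch.rand(2, 1, 256, 256) > 0.95).float()   # sparse defect mask
loss = criterion(model(x), target)
loss.backward()
```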

Cited by 15 publications (7 citation statements)
References 57 publications
“…Various image segmentation algorithms have been developed, but more recently, the success of deep learning models in various vision applications has led to a large number of studies on the development of image segmentation methods using deep learning architectures. U-Net is a convolutional neural network [27] originally proposed for medical image segmentation, but various research has shown its potential for other segmentation tasks as well [28][29][30]. The U-Net network is fast, can segment a 512 × 512 image without the need for multiple runs, and can learn from very few labelled images.…”
Section: UNet Model-Based Extraction of Contours of the Garments' Shape
confidence: 99%
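The statement above notes that a U-Net can segment a 512 × 512 image in a single forward pass. As a hedged illustration (assuming the third-party segmentation_models_pytorch package, which is not mentioned in the cited works), a U-Net with a pretrained encoder produces a full-resolution mask in one call:

```python
# Single-pass segmentation of a 512x512 image with an off-the-shelf U-Net.
# segmentation_models_pytorch is an assumed convenience; a hand-written U-Net
# (as sketched earlier) is used the same way.
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="resnet34",        # ImageNet-pretrained encoder helps when
    encoder_weights="imagenet",     # only a few labelled images are available
    in_channels=3,
    classes=1,                      # binary foreground/background mask
).eval()

image = torch.rand(1, 3, 512, 512)  # one normalized RGB image
with torch.no_grad():
    logits = model(image)           # one forward pass, no tiling needed
mask = logits.sigmoid() > 0.5       # shape (1, 1, 512, 512)
print(mask.shape)
```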
“…To address conspicuous abnormal conditions, such as track cracks, foreign objects on subway sleepers, and train bearing defects, scholars frequently employ image processing technology to enable detection. This is achieved through the extraction of image edge information [2], the integration of deep learning technology [3], the enhancement of deep feature fusion algorithms [4], and the proposal of novel network architectures [5]. Various methods, such as linear array cameras and computer vision [6], ultrasonic detection [7], infrared characterization [8], and others, have been proposed for the safety inspection of track and train structures. These methods have the potential to enhance the level of intelligent safety detection and enable early detection and warning of certain safety hazards.…”
Section: Introduction
confidence: 99%
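One of the techniques listed in the statement above, extraction of image edge information, can be illustrated with a classical operator. This is a generic sketch using OpenCV's Canny detector on a placeholder image path, not the specific edge method of the cited works:

```python
# Classical edge extraction, often a first step in surface-inspection pipelines.
# "tunnel_lining.png" and the hysteresis thresholds are illustrative placeholders.
import cv2

gray = cv2.imread("tunnel_lining.png", cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)                 # suppress sensor noise first
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)   # binary edge map
cv2.imwrite("edges.png", edges)
```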
“…Three optimization methods (data augmentation, transfer learning, and a cascade strategy) were further used to improve model accuracy. Wang et al. [24] introduced atrous spatial pyramid pooling and inception modules into the U-Net architecture to improve segmentation performance in subway tunnels.…”
Section: Introduction
confidence: 99%
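The last statement notes that Wang et al. add atrous spatial pyramid pooling (ASPP) and inception modules to U-Net. The sketch below is a generic ASPP block in PyTorch, showing how parallel dilated convolutions capture multi-scale context before their outputs are fused; the dilation rates and channel widths are assumptions, and this is not the paper's exact module.

```python
# Generic atrous spatial pyramid pooling (ASPP) block; dilation rates and
# channel widths are illustrative, not the values used by Wang et al.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        # One 3x3 branch per dilation rate; padding=rate keeps spatial size fixed.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        # Fuse the concatenated multi-scale features back to out_ch channels.
        self.project = nn.Sequential(
            nn.Conv2d(out_ch * len(rates), out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]
        return self.project(torch.cat(feats, dim=1))

# Example: drop the block onto a U-Net bottleneck feature map.
aspp = ASPP(in_ch=256, out_ch=256)
bottleneck = torch.randn(1, 256, 64, 64)
print(aspp(bottleneck).shape)   # torch.Size([1, 256, 64, 64])
```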