2020
DOI: 10.3390/s20164450
Storm-Drain and Manhole Detection Using the RetinaNet Method

Abstract: As key components of the urban drainage system, storm-drains and manholes are essential to the hydrological modeling of urban basins. Accurate mapping of these objects can help to improve storm-drain systems for the prevention and mitigation of urban floods. Novel Deep Learning (DL) methods have been proposed to aid the mapping of these urban features. The main aim of this paper is to evaluate the state-of-the-art object detection method RetinaNet to identify storm-drains and manholes in urban areas in str…

Cited by 27 publications (12 citation statements) · References 36 publications
“…RetinaNet combines the advantages of multiple target recognition methods, especially the “anchor” concept introduced by the Region Proposal Network (RPN) [14], and the use of feature pyramids in the Single Shot MultiBox Detector (SSD) [20] and Feature Pyramid Networks (FPN) [21]. RetinaNet has a wide range of applications, such as ship detection in remote sensing images of different resolutions [22], identification of storm-drains and manholes in urban areas [23], fly identification [24], and rail surface crack detection [25]. The experiment described in this paper was conducted on a computer equipped with an Intel® Xeon W-2145 CPU and an NVIDIA GeForce RTX 2080 Ti.…”
Section: Methods (mentioning)
confidence: 99%
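As a rough illustration of the architecture this statement describes, below is a minimal inference sketch using torchvision's retinanet_resnet50_fpn (ResNet-50 backbone with FPN and the focal-loss head). The COCO-pretrained weights and the image path are stand-ins; detecting storm-drains and manholes as in [23] would require fine-tuning on a labeled aerial dataset.

import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# COCO-pretrained weights as a stand-in; a storm-drain/manhole detector
# would be fine-tuned on domain imagery.
model = torchvision.models.detection.retinanet_resnet50_fpn(weights="DEFAULT")
model.eval()

image = to_tensor(Image.open("street_scene.jpg").convert("RGB"))  # hypothetical path
with torch.no_grad():
    pred = model([image])[0]  # dict with "boxes", "scores", "labels"

keep = pred["scores"] > 0.5  # simple confidence threshold
for box, label, score in zip(pred["boxes"][keep], pred["labels"][keep], pred["scores"][keep]):
    print(f"class {label.item()}: {score:.2f} at {box.tolist()}")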
“…In our implementation, we found that d = 0.3 m and C_min = 10,000 work well to remove the majority of these ghost points without removing other relevant clusters. The ground segmentation is performed by applying a cloth simulation filter [26]. This filter inverts the point cloud along the Z-direction and simulates a cloth being dropped on top of it.…”
Section: SOR, CC, CSF and ROI Filtering (mentioning)
confidence: 99%
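The d/C_min filter in this statement amounts to a connected-components criterion: keep a point only if it belongs to a component of at least C_min points linked at distance d. A minimal sketch, assuming scikit-learn is acceptable (DBSCAN with min_samples=1 reduces to exactly this single-linkage grouping):

import numpy as np
from sklearn.cluster import DBSCAN

def remove_ghost_points(points: np.ndarray, d: float = 0.3, c_min: int = 10_000) -> np.ndarray:
    """points: (N, 3) XYZ coordinates in metres; d and c_min follow the quoted values."""
    # min_samples=1 makes DBSCAN equivalent to connected components at radius d.
    labels = DBSCAN(eps=d, min_samples=1).fit_predict(points)
    counts = np.bincount(labels)
    return points[counts[labels] >= c_min]  # drop small (ghost) components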
“…The classification threshold is the minimum distance to classify a point as ground or non-ground. The optimal parameter settings are based on the findings in [26] and a few experimental tests. We chose 1, 3 and 0.3 m for the grid resolution, rigidity and classification threshold, respectively.…”
Section: SOR, CC, CSF and ROI Filtering (mentioning)
confidence: 99%
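A minimal sketch of those settings, assuming the open-source Python bindings of the cloth simulation filter of [26] (package cloth-simulation-filter, module CSF); parameter names follow that package, and the input file is hypothetical:

import numpy as np
import CSF

xyz = np.loadtxt("cloud.txt")            # (N, 3) point cloud; hypothetical file

csf = CSF.CSF()
csf.params.cloth_resolution = 1.0        # grid resolution of 1 m
csf.params.rigidness = 3                 # cloth rigidity of 3
csf.params.class_threshold = 0.3         # classification threshold of 0.3 m
csf.setPointCloud(xyz)

ground_idx, non_ground_idx = CSF.VecInt(), CSF.VecInt()
csf.do_filtering(ground_idx, non_ground_idx)  # simulates the inverted-cloth drop

ground = xyz[np.array(ground_idx)]
non_ground = xyz[np.array(non_ground_idx)]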
“…The first Faster R-CNN detects utility poles, and the second one crops the top half of each detected object to find missing pole caps. Moreover, RetinaNet outperformed other deep learning methods in several remote sensing applications [15,20,21]. Despite these initial efforts, there is a lack of research focusing on the automatic mapping of utility poles from aerial images, particularly with novel deep learning methods such as Adaptive Training Sample Selection (ATSS).…”
Section: Introduction (mentioning)
confidence: 99%
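The two-stage arrangement in this statement can be sketched as one detector feeding crops to another. A hedged illustration with torchvision Faster R-CNN models standing in for the fine-tuned detectors of the cited work:

import torch
import torchvision

# Stand-ins: the cited work fine-tunes each stage on its own labels.
pole_detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
cap_detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def detect_missing_caps(image: torch.Tensor, score_thr: float = 0.5):
    """image: (3, H, W) float tensor in [0, 1]."""
    results = []
    with torch.no_grad():
        poles = pole_detector([image])[0]
        for box, score in zip(poles["boxes"], poles["scores"]):
            if score < score_thr:
                continue
            x1, y1, x2, y2 = box.int().tolist()
            crop = image[:, y1:(y1 + y2) // 2, x1:x2]  # top half of the pole
            results.append(cap_detector([crop])[0])
    return results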
“…We compared the achieved results with Faster R-CNN [23] and RetinaNet [24], which are common methods applied in remote sensing image analysis. RetinaNet, for example, outperformed other deep learning methods in several remote sensing applications [15,20,21].…”
Section: Introduction (mentioning)
confidence: 99%
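For a comparison like the one described, a common protocol is COCO-style mean average precision over the same test split. A minimal sketch, assuming torchmetrics; the prediction/target variables are placeholders for the two models' outputs:

from torchmetrics.detection import MeanAveragePrecision

def coco_map(preds, targets) -> float:
    """preds/targets: lists of dicts with 'boxes' and 'labels' ('scores' in preds)."""
    metric = MeanAveragePrecision()
    metric.update(preds, targets)
    return float(metric.compute()["map"])

# map_retinanet = coco_map(retinanet_preds, targets)  # hypothetical outputs
# map_frcnn = coco_map(frcnn_preds, targets)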