2018
DOI: 10.1016/j.engstruct.2017.10.057
Visual data classification in post-event building reconnaissance

Cited by 105 publications (62 citation statements) | References 21 publications
“…In other words, 78.8% of all damage instances annotated as Damage 1 were correctly predicted on average, and likewise for the other classes and CNN architectures. These precision and recall values are considerably higher than those reported by Yeum et al. for single-class (spalling) detection (precision: 40.48%, recall: 62.16%) on a similar dataset. Minor cracks in concrete are typically hard to detect due to potential noise infusion.…”

Section: Results (contrasting)
confidence: 56%
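The precision and recall figures quoted in this excerpt follow the standard per-class definitions. As a minimal sketch (the confusion-matrix counts below are hypothetical, not taken from the cited studies), the computation is:

```python
# Standard per-class precision/recall from confusion-matrix counts.
# The counts are hypothetical and only illustrate how figures such as
# "precision: 40.48%, recall: 62.16%" are obtained.

def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Return (precision, recall) for a single class."""
    precision = tp / (tp + fp)   # fraction of predicted positives that are correct
    recall = tp / (tp + fn)      # fraction of annotated positives that are recovered
    return precision, recall

# Example: 79 of 100 annotated "Damage 1" regions detected, with 21 false alarms.
p, r = precision_recall(tp=79, fp=21, fn=21)
print(f"precision = {p:.1%}, recall = {r:.1%}")  # precision = 79.0%, recall = 79.0%
```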
“…The convolution kernels in a DCNN capture spatially invariant characteristics such as edges and contrast from the input image, and these features are then used to make inferences about the image. Since 2017, the rapid growth of DCNN-based approaches for damage detection in civil engineering has shown huge potential (Atha & Jahanshahi, 2018; Cha, Choi, & Büyüköztürk, 2017; Cha, Choi, Suh, Mahmoudkhani, & Büyüköztürk, 2018; Chen & Jahanshahi, 2018; Gao & Mosalam, 2018; Kumar, Abraham, Jahanshahi, Iseley, & Starr, 2018; Lin, Nie, & Ma, 2017; Yeum, Dyke, Ramirez, & Benes, 2016; Yeum, Dyke, & Ramirez, 2018; Wu & Jahanshahi, 2018b; Xue & Li, 2018; Zhang et al., 2017). However, the high computation and memory demands of DCNNs make them inappropriate for deployment on mobile inspection devices, such as unmanned aerial vehicles (UAVs) and robots.…”

Section: Background and Motivation (mentioning)
confidence: 99%
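As a hedged illustration of the kind of DCNN classifier this excerpt describes, the sketch below defines a deliberately small network (hypothetical; it is not the architecture of any of the cited papers): stacked convolutions extract local, spatially invariant features such as edges and contrast, and a linear head maps them to a damage class. Its small parameter count also hints at why compact models are preferred for UAVs and robots.

```python
import torch
import torch.nn as nn

class SmallDamageCNN(nn.Module):
    """Hypothetical compact CNN for image-level damage classification.
    Stacked convolutions capture local, spatially invariant features (edges,
    contrast); pooling reduces resolution; a linear head predicts the class."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling keeps the model small
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.features(x).flatten(1)
        return self.classifier(feats)

# One 224x224 RGB image -> logits over the (hypothetical) damage classes.
logits = SmallDamageCNN()(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 4])
```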
“…For instance, if the building images used for training contain only wooden buildings, the classifier may not be sufficiently accurate when classifying images of masonry or concrete buildings (Yeum, Dyke, & Ramirez, 2018). This building classifier is trained in advance using a large volume of ground-truth building images.…”

Section: Algorithm (mentioning)
confidence: 99%
“…The ground-truth building images used for training the classifier must include images of buildings with an appearance similar to the target building. For instance, if the building images used for training contain only wooden buildings, the classifier may not be sufficiently accurate when classifying images of masonry or concrete buildings (Yeum, Dyke, & Ramirez, 2018). After applying the classifier to ⟂ 0 , we determine a tight bounding box for the building in that image.…”

Section: Algorithm (mentioning)
confidence: 99%
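A minimal sketch of the two-step idea in these excerpts: a classifier (trained on ground-truth building images) screens the photo, and a detector then returns a tight bounding box for the building. The code below uses torchvision's generic pretrained Faster R-CNN purely as a stand-in; it is trained on COCO (which has no building class) and is not the classifier or box-selection procedure of the cited work.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Hypothetical two-stage use: (1) an image-level classifier, retrained on
# building types, screens the reconnaissance photo; (2) a generic detector
# proposes a tight bounding box for the building in that photo.
detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

image = torch.rand(3, 480, 640)            # stand-in for a reconnaissance photo
with torch.no_grad():
    prediction = detector([image])[0]      # dict with 'boxes', 'labels', 'scores'

# Keep the highest-scoring box as the (assumed) building bounding box.
if len(prediction["boxes"]) > 0:
    best = prediction["scores"].argmax()
    building_box = prediction["boxes"][best]   # [x_min, y_min, x_max, y_max]
    print(building_box)
```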