An uncertainty-aware framework for reliable disaster damage assessment via crowdsourcing
2021 | DOI: 10.1016/j.ijdrr.2021.102110

Cited by 30 publications (16 citation statements) · References 36 publications

“…The crowdsourced damage assessment adopted in this study is based on the methodology presented in Khajwal and Noshadravan (2021). The responses received from the participants are inferred using two approaches.…”
Section: Results
confidence: 99%
“…These efforts broadly fall into two categories: crowdsourcing or participatory damage assessment, and automated damage assessment using artificial intelligence (AI). Many recent studies have highlighted the use of citizen science and participatory approaches to enhance data collection for infrastructure monitoring (Li et al., 2021) and disaster impact assessment (Gharaibeh et al., 2021; Khajwal & Noshadravan, 2021). On the other hand, the promise of AI has not only been encouraging for solving scientific problems in other domains (Rafiei & Adeli, 2017a); it is also emerging as a promising solution for automating and enhancing the efficiency of structural health monitoring and post‐disaster damage assessment (Xu et al., 2020).…”
Section: Introduction
confidence: 99%
“…In DoriaNET, building damage severities were merely annotated by one annotator (one of the coauthors). While the DoriaNET annotation quality was monitored and well‐controlled [42], multiple humans may label the same building differently, particularly when the building condition does not exhibit a clear damage state (such as a common case where the damage appears to be in the range of several damage levels) [65]. In order to explore this further, we design a small‐scale experiment and conduct a comparative analysis of our uncertainty‐aware CNN predictions and collective human decisions from multiple annotators.…”
Section: Results
confidence: 99%
“…While the DoriaNET annotation quality was monitored and well-controlled [42], multiple humans may label the same building differently, particularly when the building condition does not exhibit a clear damage state (such as a common case where the damage appears to be in the range of several damage levels) [65]. In order to explore this further, we design a small-scale experiment and conduct a comparative analysis of our uncertainty-aware CNN predictions and collective human decisions from multiple annotators. The experiment involves 20 human annotators who assign damage labels based on the FEMA rating system to five selected building images from DoriaNET.…”
Section: Consistency of Model Predictions and Crowdsourced Annotation…
confidence: 99%
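
The statement above describes comparing collective human decisions from 20 annotators against an uncertainty-aware classifier's predictions. Below is a minimal sketch of one way such a comparison could be set up: annotations are aggregated into an empirical label distribution per building and compared with a model's predictive distribution. The four-level damage scale, the label counts, and the model probabilities are illustrative assumptions, not values taken from the cited papers.

```python
# Hedged sketch (not the cited papers' method): compare a crowd's empirical
# label distribution with a model's predictive distribution for one building.
from collections import Counter

DAMAGE_STATES = ["none", "minor", "major", "destroyed"]  # assumed 4-level scale

def annotator_distribution(labels):
    """Empirical damage-label distribution from crowdsourced annotations."""
    counts = Counter(labels)
    total = len(labels)
    return [counts.get(s, 0) / total for s in DAMAGE_STATES]

def total_variation(p, q):
    """Total variation distance between two discrete distributions."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

# 20 hypothetical annotations for one building image
labels = ["minor"] * 9 + ["major"] * 8 + ["none"] * 2 + ["destroyed"]
human_dist = annotator_distribution(labels)

# Hypothetical predictive distribution from an uncertainty-aware classifier
model_dist = [0.10, 0.45, 0.40, 0.05]

print("human:", human_dist)   # e.g. [0.10, 0.45, 0.40, 0.05]
print("model:", model_dist)
print("TV distance:", total_variation(human_dist, model_dist))
```

A small total variation distance would indicate that the model's predictive uncertainty mirrors the disagreement among human annotators, which is one plausible way to read the "consistency" analysis the citing paper describes.
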
“…Damage assessment can also be used to inform decision making. A damage assessment contains information on the losses caused by disasters [8].…”
Section: Introduction
confidence: 99%