Social media platforms (e.g., Twitter and Facebook) can be regarded as vital sources of information during disasters for improving situational awareness (SA) and disaster management, since they play a significant role in the rapid spread of information when a disaster occurs. Because the volume of data far exceeds the capabilities of manual examination, existing works use natural language processing methods based on keywords, or classification models relying on features derived from text and other metadata (e.g., user profiles), to extract social media data that contribute to SA and automatically categorize them into relevant classes (e.g., damage and donation). However, the design of the classification schema and the associated information extraction methods is far from straightforward and depends heavily on (1) the event type, (2) the purpose of the study or analysis, and (3) the social media platform used. To this end, this paper reviews the literature on extracting social media data and provides an overview of classification schemas that have been used to assess SA in events involving natural hazards from five analytical perspectives (content, temporal, user, sentiment, and spatiotemporal), discussing the prevalent topic categories, disaster event types, study purposes, and platforms associated with each schema. Finally, the paper summarizes the classification methods and platforms most commonly used for each disaster event type and outlines a research agenda with recommendations for future innovations.
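As a rough illustration of the keyword-based extraction methods the review surveys, the following Python sketch assigns a post to a topic category when it matches a hand-picked lexicon. The category labels, keywords, and example post are hypothetical placeholders, not a schema taken from the reviewed literature.

```python
# Minimal sketch of keyword-based categorization of disaster-related posts.
# The lexicon below is a hypothetical example; real schemas in the surveyed
# literature are event-, purpose-, and platform-specific.
from typing import Optional

CATEGORY_KEYWORDS = {
    "damage": ["collapsed", "destroyed", "damage", "flooded"],
    "donation": ["donate", "relief fund", "volunteers needed"],
}

def classify_post(text: str) -> Optional[str]:
    """Return the first category whose keywords appear in the post, else None."""
    lowered = text.lower()
    for label, keywords in CATEGORY_KEYWORDS.items():
        if any(kw in lowered for kw in keywords):
            return label
    return None

print(classify_post("Bridge collapsed after the storm, roads flooded"))  # damage
```

In practice, the reviewed studies often replace such fixed lexicons with supervised classifiers trained on text features and metadata, which is why the choice of schema matters so much.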
Advances in deep learning and computer vision are making significant contributions to flood mapping, particularly when integrated with remotely sensed data. Although existing supervised methods, especially deep convolutional neural networks, have proved effective, they require intensive manual labeling of flooded pixels to train a multi-layer deep neural network that learns abstract semantic features of the input data. This research introduces a novel weakly supervised approach to pixel-wise flood mapping that leverages multi-temporal remote sensing imagery and image processing techniques (e.g., the Normalized Difference Water Index and edge detection) to create weakly labeled data. Using these weakly labeled data, a bi-temporal U-Net model is then proposed and trained for flood detection without time-consuming and labor-intensive human annotation. Using floods from Hurricanes Florence and Harvey as case studies, we evaluated the performance of the proposed bi-temporal U-Net model against baseline models such as decision tree, random forest, gradient boosting, and adaptive boosting classifiers. To assess the effectiveness of our approach, we conducted a comprehensive assessment that (1) covered multiple test sites with varying degrees of urbanization and (2) used both bi-temporal (i.e., pre- and post-flood) and uni-temporal (i.e., post-flood only) inputs. The experimental results showed that the proposed framework of weakly labeled data generation and the bi-temporal U-Net can produce near-real-time urban flood maps with consistently high precision, recall, F1 score, IoU, and overall accuracy compared with the baseline machine learning algorithms.
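To make the weak-labeling step concrete, the sketch below shows one plausible way NDWI computed on pre- and post-flood imagery can yield weak flood labels. The NDWI formulation (Green − NIR)/(Green + NIR) is standard, but the threshold value, band arrays, and function names are illustrative assumptions; the paper's actual pipeline also incorporates edge detection and is not reproduced here.

```python
# A minimal sketch of NDWI-based weak-label generation for bi-temporal imagery.
# Threshold and data are placeholders, not the paper's settings.
import numpy as np

def ndwi(green: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized Difference Water Index; higher values indicate open water."""
    return (green - nir) / (green + nir + 1e-8)  # epsilon avoids divide-by-zero

def weak_flood_labels(green_pre, nir_pre, green_post, nir_post, thresh=0.0):
    """Weakly label a pixel as flooded if it is water after the event but not before."""
    water_pre = ndwi(green_pre, nir_pre) > thresh
    water_post = ndwi(green_post, nir_post) > thresh
    return (water_post & ~water_pre).astype(np.uint8)  # 1 = newly flooded

# Example on random reflectance-like arrays standing in for real imagery.
rng = np.random.default_rng(0)
green_pre, nir_pre, green_post, nir_post = (rng.random((64, 64)) for _ in range(4))
labels = weak_flood_labels(green_pre, nir_pre, green_post, nir_post)
print(labels.sum(), "pixels weakly labeled as flooded")
```

Labels produced this way are noisy by design; the appeal of the weakly supervised framing is that a bi-temporal U-Net trained on them can still learn robust flood segmentation without manual pixel annotation.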