Social media (e.g., Twitter, Facebook, Flickr, YouTube) and other services with user-generated content have made a staggering amount of information (and misinformation) available. Government officials seek to leverage these resources to improve services and communication with citizens. Yet the sheer volume of social data streams generates substantial noise that must be filtered. Nonetheless, the potential exists to identify issues in real time, so that emergency managers can monitor and respond to issues concerning public safety. By detecting meaningful patterns and trends in the stream of messages and information flow, events can be identified as spikes in activity, while their meaning can be deciphered through changes in content. This paper presents findings from a pilot study we conducted between June and December 2010 with government officials in Arlington, Virginia (and the greater National Capital Region around Washington, DC), with a view to understanding the use of social media by government officials as well as community organizations, businesses, and the public. We are especially interested in understanding social media use in crisis situations (whether severe or fairly common, such as traffic or weather crises).
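As an illustrative sketch of the spike-detection idea described above (not the authors' implementation), the snippet below flags time intervals whose message count rises well above the recent average; the window size and threshold are assumed values chosen for the example.

# Minimal sketch (not the study's system): flag "spikes" in a stream of
# per-interval message counts when a count exceeds the recent mean by a
# chosen number of standard deviations. Window size and threshold are
# illustrative assumptions.
from statistics import mean, stdev

def detect_spikes(counts, window=12, threshold=3.0):
    """Return indices of intervals whose message count is anomalously high."""
    spikes = []
    for i in range(window, len(counts)):
        history = counts[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and (counts[i] - mu) / sigma > threshold:
            spikes.append(i)
    return spikes

# Example: hourly message counts with a surge in the final interval.
hourly_counts = [40, 38, 45, 42, 39, 41, 44, 40, 43, 39, 42, 41, 180]
print(detect_spikes(hourly_counts))  # -> [12]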
Flooding is the world's most costly type of natural disaster in terms of both economic losses and human casualties. A first and essential step toward flood monitoring is identifying the areas most vulnerable to flooding, which gives authorities relevant regions to focus on. In this work, we propose several methods to perform flooding identification in high-resolution remote sensing images using deep learning. Specifically, some of the proposed techniques are based on specific network architectures, such as dilated and deconvolutional networks, while others were conceived to exploit the diversity of distinct networks in order to extract the maximum performance from each classifier. Evaluation of the proposed methods was conducted on a high-resolution remote sensing dataset. Results show that the proposed algorithms outperformed state-of-the-art baselines, providing improvements ranging from 1 to 4% in terms of the Jaccard Index.
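For reference, a minimal sketch of the Jaccard Index (intersection over union) used as the evaluation metric above; the binary flood masks, shapes, and values are illustrative assumptions, not taken from the paper's dataset or code.

# Minimal sketch, assuming binary flood masks as NumPy arrays
# (1 = flooded pixel, 0 = background).
import numpy as np

def jaccard_index(pred, target):
    """Intersection over union between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return 1.0 if union == 0 else intersection / union

# Example on a toy 4x4 tile.
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[0, 1, 1, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(jaccard_index(pred, target))  # 3 / 4 = 0.75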
Deep learning fostered a leap ahead in automated skin lesion analysis in the last two years. Those models, however, are expensive to train and difficult to parameterize. Objective: We investigate methodological issues for designing and evaluating deep learning models for skin lesion analysis. We explore ten choices faced by researchers: use of transfer learning, model architecture, train dataset, image resolution, type of data augmentation, input normalization, use of segmentation, duration of training, additional use of Support Vector Machines, and test data augmentation. Methods: We perform two full factorial experiments, for five different test datasets, resulting in 2560 exhaustive trials in our main experiment, and 1280 trials in our assessment of transfer learning. We analyze both with multi-way analyses of variance (ANOVA). We use the exhaustive trials to simulate sequential decisions and ensembles, with and without the use of privileged information from the test set. Results: In the main experiment, the amount of train data has a disproportionate influence, explaining almost half the variation in performance. Of the other factors, test data augmentation and input resolution are the most influential. Deeper models, when combined with extra data, also help. In the transfer experiment, transfer learning is critical; its absence brings huge performance penalties. In the simulations, ensembles of models are the best option to provide reliable results with limited resources, without using privileged information and sacrificing methodological rigor. Conclusions and Significance: Advancing research on automated skin lesion analysis requires curating larger public datasets. Indirect use of privileged information from the test set to design the models is a subtle but frequent methodological mistake that leads to overoptimistic results. Ensembles of models are a cost-effective alternative to expensive full-factorial and unstable sequential designs.
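As a hedged sketch of the ensembling strategy the abstract recommends (not the paper's exact protocol), the snippet below averages class probabilities from several independently trained models; the model count, array shapes, and class labels are illustrative assumptions.

# Minimal sketch of ensembling by averaging class probabilities from several
# independently trained models and taking the argmax per sample.
import numpy as np

def ensemble_predict(prob_list):
    """Average per-model probability arrays of shape (n_samples, n_classes)
    and return the predicted class for each sample."""
    avg = np.mean(np.stack(prob_list, axis=0), axis=0)
    return avg.argmax(axis=1)

# Example: three models scoring two lesions on two classes
# (benign vs. malignant).
m1 = np.array([[0.7, 0.3], [0.4, 0.6]])
m2 = np.array([[0.6, 0.4], [0.5, 0.5]])
m3 = np.array([[0.8, 0.2], [0.3, 0.7]])
print(ensemble_predict([m1, m2, m3]))  # -> [0 1]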