Flood events cause substantial damage to urban and rural areas. Monitoring water extent during large-scale flooding is crucial for identifying the affected areas and evaluating damage. During such events, spatial assessments of floodwater may be derived from satellite or airborne sensing platforms. Meanwhile, the increasing availability of smartphones enables individuals to document flood events directly, sharing information in real time via social media. Topographic data, which can be used to determine where floodwater can accumulate, are now often available from national mapping agencies or governmental repositories. In this work, we present and evaluate a method for rapidly estimating flood inundation extent based on a model that fuses remote sensing, social media and topographic data sources. Using geotagged photographs sourced from social media, optical remote sensing and high-resolution terrain mapping, we develop a Bayesian statistical model that estimates the probability of flood inundation through weights-of-evidence analysis. Our experiments were conducted using data collected during the 2014 UK flood event and focus on the city of Oxford and its surrounding areas. Predictions of inundation made with the proposed technique were evaluated against ground-truth flood extent. The results quantify the accuracy of the multisource mapping process, which obtained area under the receiver operating characteristic curve (AUC) values of 0.95 and 0.93 for model fitting and testing, respectively.
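To make the fusion step concrete, below is a minimal sketch of how a weights-of-evidence model can combine binary evidence rasters (e.g., social-media photo presence, optically detected water, low-lying terrain) into a per-cell flood probability. The synthetic layers, the prior and the function names are illustrative assumptions, not the authors' exact implementation:

```python
# Minimal weights-of-evidence sketch (assumed layers; not the paper's exact pipeline).
import numpy as np

def weights_of_evidence(evidence, flooded):
    """Compute W+ and W- for one binary evidence layer against flooded ground truth."""
    f = flooded == 1
    nf = ~f
    e = evidence == 1
    # Conditional probabilities of observing the evidence given each flood state
    p_e_f = (e & f).sum() / f.sum()
    p_e_nf = (e & nf).sum() / nf.sum()
    w_plus = np.log(p_e_f / p_e_nf)
    w_minus = np.log((1 - p_e_f) / (1 - p_e_nf))
    return w_plus, w_minus

def posterior_probability(layers, flooded, prior):
    """Fuse several binary evidence layers into a per-cell flood probability."""
    logit = np.full(flooded.shape, np.log(prior / (1 - prior)))
    for ev in layers:
        w_plus, w_minus = weights_of_evidence(ev, flooded)
        logit += np.where(ev == 1, w_plus, w_minus)  # add W+ where evidence present, W- elsewhere
    return 1.0 / (1.0 + np.exp(-logit))

# Synthetic demo: a true extent plus three noisy evidence layers that agree 80% of the time
rng = np.random.default_rng(0)
flooded = (rng.random((100, 100)) < 0.2).astype(int)
layers = [np.where(rng.random(flooded.shape) < 0.8, flooded, 1 - flooded) for _ in range(3)]
prob = posterior_probability(layers, flooded, prior=flooded.mean())
```

In practice the weights would be fitted on a training flood extent and then applied to the evidence layers elsewhere, which is what allows validation against an independent ground-truth extent.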
Predicting residential building age from map data

The age of a building influences its form and fabric composition, and this in turn is critical to inferring its energy performance. However, this data is often unknown. In this paper, we present a methodology to automatically identify the construction period of houses for the purpose of urban energy modelling and simulation. We describe two major stages to achieving this: a per-building classification model and a post-classification analysis that improves the accuracy of the class inferences. In the first stage, we extract measures of morphology and neighbourhood characteristics from readily available topographic mapping, a high-resolution Digital Surface Model and statistical boundary data. These measures are then used as features within a random forest classifier to infer an age category for each building. We evaluate various predictive model combinations based on scenarios of available data, using 5-fold cross-validation to train and tune the classifier hyper-parameters on a sample of city properties. Evaluated on a separate sample, the best-performing cross-validated model achieved 77% accuracy. In the second stage, we improve the inferred per-building age classification (for a spatially contiguous neighbourhood test sample) by aggregating prediction probabilities using different methods of spatial reasoning. We report on three methods for achieving this, based on adjacency relations, near-neighbour graph analysis and graph-cuts label optimisation. We show that post-processing can improve the accuracy by up to 8 percentage points.
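As an illustrative sketch of the first stage, the snippet below trains a random forest on hypothetical per-building features with 5-fold cross-validated hyper-parameter tuning, mirroring the setup described above; the feature names, class count and parameter grid are assumptions rather than the paper's actual configuration:

```python
# Illustrative stage-one sketch (synthetic features; not the paper's real data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)
# Hypothetical per-building features: footprint area, height, compactness, neighbour density
X = rng.random((2000, 4))
y = rng.integers(0, 5, size=2000)  # five assumed construction-period classes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# 5-fold cross-validation to tune hyper-parameters, as described in the abstract
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10, 20]},
    cv=5,
)
search.fit(X_train, y_train)
print("held-out accuracy:", search.score(X_test, y_test))
```

The second stage would then operate on `search.best_estimator_.predict_proba(...)`, aggregating per-building class probabilities across spatial neighbours before assigning final labels.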
Volunteered geographic information (VGI), whether in the context of citizen science or the mining of social media, has proven useful in various domains including natural hazards, health status, disease epidemics and biological monitoring. Nonetheless, variable or unknown data quality arising from crowdsourcing settings is still an obstacle to fully integrating these data sources into environmental studies and, potentially, policy making. The data curation process, in which quality assurance (QA) is needed, is often driven by the direct usability of the collected data within a data conflation or data fusion (DCDF) process that combines the crowdsourced data into a single view, potentially using other data sources as well. Looking at current practices in VGI data quality and using two examples, namely land cover validation and inundation extent estimation, this paper discusses the close links between QA and DCDF. It aims to help decide whether disentangling the two is possible, and whether doing so is beneficial, for understanding the data curation process and for future usage of crowdsourced data. Analysing situations throughout the data curation process where and when entanglement between QA and DCDF occurs, the paper explores the various facets of VGI data capture, as well as data quality assessment and purposes. Far from rejecting the ISO usability quality criterion, the paper advocates decoupling the QA process from the DCDF step as much as possible, while still integrating them within an approach analogous to a Bayesian paradigm.
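One way to picture the decoupling advocated here is a minimal Bayesian sketch in which QA assigns each crowdsourced report a reliability score, and the DCDF step then fuses the reports as independent likelihoods; the numbers, the independence assumption and the inundation framing below are all illustrative:

```python
# Toy sketch of decoupled QA and DCDF (all values assumed): QA scores each
# report's reliability; DCDF fuses the reports in a Bayesian odds update.

def fuse_reports(prior, reports, reliabilities):
    """Posterior P(flooded) after fusing binary VGI reports of varying reliability."""
    odds = prior / (1 - prior)
    for says_flooded, r in zip(reports, reliabilities):
        # Reliability r is P(report matches the true state), giving a likelihood ratio
        lr = r / (1 - r) if says_flooded else (1 - r) / r
        odds *= lr
    return odds / (1 + odds)

# Three reports: two say "flooded" (reliability 0.8 and 0.6), one says "dry" (0.7)
print(fuse_reports(prior=0.1, reports=[1, 1, 0], reliabilities=[0.8, 0.6, 0.7]))
```

The point of the separation is that the reliability scores can be produced, audited and reused independently of any particular fusion task.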
A 3D model communicates more effectively than a 2D model; hence, applications of 3D city models are rapidly gaining significance in urban studies. At present, however, there is a dearth of free, high-resolution 3D city models available for use. This paper offers potential solutions to this problem by providing a globally replicable methodology for generating low-cost 3D city models from open-source 2D building data in conjunction with open satellite-based elevation datasets. Two geographically and morphologically different case studies were used to develop and test this methodology: the Chinese city of Shanghai and the city of Nottingham in the UK. The method is based principally on OpenStreetMap (OSM) and Advanced Land Observing Satellite World 3D digital surface model (AW3D DSM) data, and uses GMTED2010 DTM data for undulating terrain. Further enhancement of the resultant 3D model, though not compulsory, can draw on higher-resolution elevation models (e.g., an airborne LiDAR-generated DTM) that are not always open source but can be used where available. We further test and develop methods to improve the accuracy of the generated 3D models by employing a small subset of high-resolution data that is not open source but can be purchased on a minimal budget. Given that these scenarios of data availability are globally applicable and time-efficient for 3D building generation (where 2D building footprints are available), our proposed methodology has the potential to accelerate the production of 3D city models, and thus to facilitate their dependent applications (e.g., disaster management) wherever commercial 3D city models are unavailable.
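The core height-extraction step behind such models can be sketched as subtracting the DTM from the DSM within each building footprint and extruding the 2D footprint by the result; the numpy-only example below uses synthetic rasters and a placeholder footprint mask instead of real OSM/AW3D/GMTED2010 data:

```python
# Simplified height-estimation sketch (synthetic arrays; real inputs would be
# OSM footprints rasterised against AW3D DSM and GMTED2010 or LiDAR DTM tiles).
import numpy as np

def building_height(dsm, dtm, footprint_mask):
    """Estimate one building's height as the median DSM-minus-DTM value inside its footprint."""
    heights = dsm[footprint_mask] - dtm[footprint_mask]
    return float(np.median(heights))  # the median resists roof-edge and vegetation noise

# Synthetic 50x50 scene: flat 10 m terrain with one 12 m-tall building in the middle
dtm = np.full((50, 50), 10.0)
dsm = dtm.copy()
mask = np.zeros((50, 50), dtype=bool)
mask[20:30, 20:30] = True
dsm[mask] += 12.0

print(building_height(dsm, dtm, mask))  # -> 12.0
# Extruding the 2D footprint by this height yields a block (LoD1-style) building model.
```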
Creating as-built plans of building interiors is a challenging task. In this paper, we present a semi-automatic modelling system for creating residential building interior plans and integrating them with existing map data to produce building models. Taking a set of imprecise measurements made with an interactive mobile phone room-mapping application, the system performs spatial adjustments in accordance with soft and hard constraints imposed on the building plan geometry. The approach uses an optimisation model that exploits a high-accuracy building outline, such as can be found in topographic map data, together with the building topology, to improve the quality of interior measurements and generate a standardised output. We test our system on building plans of five residential homes. Our evaluation shows that the approach enables construction of accurate interior plans from imprecise measurements. The experiments report an average accuracy of 0.24 m, close to the 0.20 m recommended by the CityGML LoD4 specification.
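As a toy one-dimensional analogue of this constrained adjustment, the example below distributes the misclosure between imprecise phone-measured room widths and an accurately mapped outline width; the real system solves a richer 2D optimisation with many soft and hard constraints, and all values here are assumed:

```python
# Toy 1-D adjustment sketch: least-squares fit of room widths to a hard
# outline constraint (assumed numbers; not the paper's full 2D optimisation).
import numpy as np

def adjust_lengths(measured, outline_width):
    """Adjust imprecise room widths so they exactly span the mapped building outline.

    Minimising sum((x_i - m_i)^2) subject to sum(x_i) = W has the closed-form
    solution of spreading the misclosure evenly across the measurements.
    """
    misclosure = outline_width - measured.sum()
    return measured + misclosure / measured.size

# Phone-measured widths of three adjoining rooms vs. a 10.00 m topographic outline
measured = np.array([3.30, 4.10, 2.45])  # sums to 9.85 m, leaving a 0.15 m misclosure
print(adjust_lengths(measured, 10.00))   # -> [3.35, 4.15, 2.50]
```

In the full system, individual measurements would additionally be weighted by their expected precision, so less trustworthy measurements absorb more of the misclosure.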