The concept of Volunteered Geographic Information (VGI) has recently emerged from the new Web 2.0 technologies. The OpenStreetMap project is currently the most significant example of a system based on VGI. It aims to produce free vector geographic databases using contributions from Internet users. Spatial data quality becomes a key consideration in this context of freely downloadable geographic databases. This article studies the quality of French OpenStreetMap data. It extends the work of Haklay to France, provides a larger set of spatial data quality element assessments (i.e. geometric, attribute, semantic and temporal accuracy, logical consistency, completeness, lineage, and usage), and uses different methods of quality control. The outcome of the study raises questions about the heterogeneity of capture processes and production scales, and about compliance with standardized and accepted specifications. To improve data quality, a balance has to be struck between contributors' freedom and their adherence to specifications. Developing appropriate solutions to provide this balance is an important research issue in the domain of user-generated content.
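As an illustration of one of the quality elements listed above, the following sketch computes geometric (positional) accuracy statistics for OSM points that have already been matched to reference features. The haversine helper, the coordinate pairs, and the prior matching step are assumptions made for illustration, not the protocol used in the study.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical matched pairs: (OSM point, reference point), each as (lat, lon).
matched_pairs = [
    ((48.8584, 2.2945), (48.85843, 2.29452)),
    ((45.7640, 4.8357), (45.76395, 4.83566)),
]

errors = [haversine_m(o[0], o[1], r[0], r[1]) for o, r in matched_pairs]
mean_err = sum(errors) / len(errors)
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
print(f"mean positional error: {mean_err:.2f} m, RMSE: {rmse:.2f} m")
```

In practice such statistics would be aggregated per feature class and per region before being compared against the reference database's own accuracy specification.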
This paper presents a methodology developed for a study evaluating the state of the art of automated map generalization in commercial software without applying any customization. The objectives of this study are to learn more about generic and specific requirements for automated map generalization, to show the possibilities and limitations of commercial generalization software, and to identify areas for further research. The methodology had to accommodate several types of heterogeneity to guarantee independent testing and evaluation of the available generalization solutions. The paper presents the two main steps of the methodology. The first step is the analysis of map requirements for automated generalization, which consisted of sourcing representative test cases, defining map specifications as generalization constraints, harmonizing constraints across the test cases, and analyzing the types of constraints that were defined. The second step is the evaluation of generalized outputs. In this step, three evaluation methods were combined to balance human and machine evaluation and to expose possible inconsistencies. In the discussion, the applied methodology is evaluated and areas for further research are identified.
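To make the notion of a generalization constraint more concrete, here is a minimal sketch of how a single legibility constraint could be encoded and checked against a generalized output. The constraint class, the 250 m² threshold, and the building areas are hypothetical and are not taken from the study's harmonized constraint set.

```python
from dataclasses import dataclass

@dataclass
class MinAreaConstraint:
    """Legibility constraint: features of a class must cover at least min_area_m2 at the target scale."""
    feature_class: str
    min_area_m2: float

    def satisfied(self, area_m2: float) -> bool:
        return area_m2 >= self.min_area_m2

# Illustrative constraint and generalized output (feature id -> ground area in m²).
constraint = MinAreaConstraint(feature_class="building", min_area_m2=250.0)
generalized_buildings = {"b1": 310.0, "b2": 180.0, "b3": 900.0}

violations = [fid for fid, area in generalized_buildings.items()
              if not constraint.satisfied(area)]
print(f"{len(violations)} of {len(generalized_buildings)} buildings violate "
      f"the minimum-area constraint: {violations}")
```

Automated checks of this kind can cover only the machine-measurable part of the evaluation; visual inspection by cartographic experts remains necessary for the rest.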
In the context of geographical database generalization, this article deals with a generic process for road network selection. The process is based on the geographical context, which is made explicit, and on the preservation of characteristic structures, and it draws on methods collected from the literature and adapted for this purpose. The first step detects significant structures and patterns of the road network, such as roundabouts and highway interchanges. This enriches the initial dataset with explicit geographic structures that were only implicit in the original data, which both makes the geographical context explicit and helps preserve characteristic structures. The enrichment is then used as knowledge input for the next step: the selection of roads in rural areas using graph theory techniques. After that, urban roads are selected by means of a complex block aggregation algorithm. Continuity between urban and rural areas is guaranteed by modelling the network as strokes. Finally, the previously detected characteristic structures are typified to maintain their properties in the selected network. This automated process has been fully implemented on Clarity™ and tested on large datasets.
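The stroke-based modelling of continuity mentioned above can be illustrated with a small sketch that chains road segments by "good continuation", i.e. small deflection angles at shared nodes. The 45° threshold, the toy network, and the greedy merging strategy are illustrative assumptions, not the algorithm implemented in Clarity™.

```python
import math
from collections import defaultdict

def bearing(p, q):
    return math.atan2(q[1] - p[1], q[0] - p[0])

def deflection(a, b, c):
    """Absolute turn (radians) when travelling a -> b -> c; 0 means perfectly straight."""
    d = bearing(b, c) - bearing(a, b)
    return abs(math.atan2(math.sin(d), math.cos(d)))

def build_strokes(segments, max_deflection=math.radians(45)):
    """Chain road segments into strokes: at a node shared by exactly two strokes,
    join them when the deflection angle stays below the threshold."""
    strokes = [list(seg) for seg in segments]
    merged = True
    while merged:
        merged = False
        incident = defaultdict(list)
        for i, s in enumerate(strokes):
            incident[s[0]].append(i)
            incident[s[-1]].append(i)
        for node, ids in incident.items():
            if len(ids) != 2 or ids[0] == ids[1]:
                continue  # only simple pass-through nodes are chained here
            a, b = strokes[ids[0]], strokes[ids[1]]
            # orient both strokes so they meet head-to-tail at `node`
            if a[0] == node:
                a = a[::-1]
            if b[-1] == node:
                b = b[::-1]
            if a[-1] != node or b[0] != node:
                continue
            if deflection(a[-2], node, b[1]) <= max_deflection:
                strokes[ids[0]] = a + b[1:]
                del strokes[ids[1]]
                merged = True
                break
    return strokes

# Toy network: a gently bending road followed by a sharp turn.
segments = [
    [(0, 0), (1, 0)],
    [(1, 0), (2, 0.1)],    # gentle bend: chained into the same stroke
    [(2, 0.1), (2, 1.0)],  # sharp turn: starts a new stroke
]
for stroke in build_strokes(segments):
    print(stroke)
```

Once strokes are built, selection can operate on whole strokes rather than individual segments, which is what preserves continuity across the urban–rural boundary.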
With the development of location-aware devices and the success and widespread use of Web 2.0 techniques, citizens are able to act as sensors by contributing geographic information. In this context, data quality is an important aspect that should be taken into account when using this source of data for different purposes. The goal of this paper is to analyze the quality of crowdsourced data and to study its evolution over time. We propose two types of approaches, with two different methods for each: (1) using the intrinsic characteristics of the crowdsourced datasets; or (2) evaluating crowdsourced Points of Interest (POIs) against external datasets (i.e., authoritative references or other crowdsourced datasets). The potential of combining these approaches is then demonstrated, to overcome the limitations associated with each individual method. In this paper, we focus on POIs and places coming from the highly successful crowdsourcing project OpenStreetMap. The results show that the proposed approaches are complementary in assessing data quality. The positive results obtained for data matching show that analyzing data quality through automatic data matching is possible, but considerable effort and attention are needed for schema matching, given the heterogeneity of OSM and of the representations used in authoritative datasets. For the features studied, change over time is sometimes due to disagreements between contributors, but in most cases the change improves the quality of the data.
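As a rough sketch of the external-dataset approach, the following code matches crowdsourced POIs to an authoritative dataset using a distance threshold and a simple name-similarity score. The thresholds, the planar distance approximation, and the sample POIs are invented for illustration and do not reproduce the matching method of the paper.

```python
import math
from difflib import SequenceMatcher

def distance_m(a, b):
    """Rough planar distance in metres between two (lat, lon) points; adequate at city scale."""
    dlat = (a[0] - b[0]) * 111_320
    dlon = (a[1] - b[1]) * 111_320 * math.cos(math.radians(a[0]))
    return math.hypot(dlat, dlon)

def name_similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_pois(osm_pois, reference_pois, max_dist_m=50.0, min_name_sim=0.6):
    """For each OSM POI, keep the nearest reference POI that is both close enough
    and has a sufficiently similar name; otherwise report the POI as unmatched."""
    matches, unmatched = [], []
    for osm in osm_pois:
        candidates = [
            (distance_m(osm["pos"], ref["pos"]), ref)
            for ref in reference_pois
            if distance_m(osm["pos"], ref["pos"]) <= max_dist_m
            and name_similarity(osm["name"], ref["name"]) >= min_name_sim
        ]
        if candidates:
            matches.append((osm, min(candidates, key=lambda c: c[0])[1]))
        else:
            unmatched.append(osm)
    return matches, unmatched

# Illustrative data (names and coordinates are invented).
osm_pois = [{"name": "Cafe de la Gare", "pos": (48.8402, 2.3211)}]
reference_pois = [{"name": "Café de la Gare", "pos": (48.84023, 2.32105)}]
matches, unmatched = match_pois(osm_pois, reference_pois)
print(f"{len(matches)} matched, {len(unmatched)} unmatched")
```

Feature matching of this kind is only the geometric and lexical half of the problem; aligning OSM tags with the classes of an authoritative schema (schema matching) is where, as the abstract notes, most of the effort lies.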
The perspective of European National Mapping Agencies (NMAs) on the role of citizen sensing in map production was explored. The NMAs varied greatly in their engagement with the community generating volunteered geographic information (VGI) and in their future plans. From an assessment of NMA standard practices, it was evident that much VGI was acquired with a positional accuracy that, while lower than that typically achieved by NMAs, actually exceeded the requirements of the nominal data capture scale used by most NMAs. Opportunities for VGI use in map revision and updating were evident, especially for agencies that follow a continuous rather than cyclical updating policy. Some NMAs had also developed systems to engage with citizen sensors, and examples are discussed. Only rarely was VGI used to collect data on features beyond the standard set used by the NMAs. The potential role of citizen sensing, and hence its current scale of use by NMAs, is limited by a series of concerns, notably relating to data quality, the nature and motivation of the contributors, legal issues, the sustainability of the data source, and employment fears of NMA staff. Possible priorities for future research and development are identified to help ensure that the potential of VGI in mapping is realized.