The increasing availability of large-scale Global Positioning System (GPS) data from in-vehicle embedded terminal devices enables methods that derive road network cartographic information from drivers' recorded traces. Machine learning approaches have previously been proposed for automatic road network map inference, and this line of work has recently been extended to infer road attributes as well, such as speed limits or the number of lanes. In this paper, we address the problem of detecting traffic signals from a set of vehicle speed profiles, from a classification perspective. Each data instance is a speed-versus-distance plot depicting over a hundred profiles on a 100-meter-long road span. We propose three different ways of deriving features: the first relies on the raw speed measurements; the second uses image recognition techniques; and the third is based on functional data analysis. We feed these features into the most commonly used classification algorithms, and a comparative analysis shows that a functional description of speed profiles with wavelet transforms seems to outperform the other approaches with most of the tested classifiers. It also highlighted
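A minimal sketch of the wavelet-feature idea described above, assuming Python with PyWavelets and scikit-learn; the toy profile data, the db4/level-3 transform, and the random-forest classifier are illustrative assumptions, not the paper's exact pipeline:

```python
# Hypothetical sketch: derive wavelet-based features from speed profiles
# and feed them to a standard classifier.
import numpy as np
import pywt                                  # PyWavelets
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def wavelet_features(profile, wavelet="db4", level=3):
    """Flatten the discrete wavelet transform coefficients of one
    speed-versus-distance profile into a feature vector."""
    coeffs = pywt.wavedec(profile, wavelet, level=level)
    return np.concatenate(coeffs)

# Toy data: 200 road spans, each summarized by one resampled profile of
# 128 speed samples over 100 m (real instances aggregate >100 profiles).
rng = np.random.default_rng(0)
profiles = rng.random((200, 128))
y = rng.integers(0, 2, 200)                  # 1 = traffic signal present

X = np.array([wavelet_features(p) for p in profiles])
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```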
Many different websites offer the opportunity to share and download landmarks and routes produced by the crowd. Landmarks near a route, or routes passing near landmarks, may help in the context of mountain rescue. It is therefore necessary to identify relevant data sources and to describe their characteristics. In this paper, we explore the potential of crowdsourced data to serve as data sources in the context of mountain rescue. Our aim is to study the content of different sources to gain a better understanding of how landmarks and routes are mapped, to demonstrate the complementarity of crowdsourced data with respect to authoritative data, and to study the feasibility of defining links between routes and landmarks. The proposed method uses integration techniques such as map matching, route construction, and data matching. The results show that the large number of non-matched features demonstrates the richness of crowdsourced data. The matching results also yield new semantic rules for both landmark types and route geometries.
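As a rough illustration of defining links between routes and landmarks, the sketch below links landmarks to a route by spatial proximity using Shapely; the coordinates and the 100 m threshold are invented for the example and do not come from the paper:

```python
# Hypothetical sketch: link crowdsourced landmarks to a route when they
# lie within a distance threshold of its geometry.
from shapely.geometry import LineString, Point

route = LineString([(0, 0), (500, 0), (500, 400)])   # projected coords (m)
landmarks = {
    "shelter": Point(250, 60),
    "summit_cross": Point(480, 900),
    "water_source": Point(510, 200),
}

MAX_DIST_M = 100.0
linked = {name: pt for name, pt in landmarks.items()
          if route.distance(pt) <= MAX_DIST_M}
print(sorted(linked))                        # ['shelter', 'water_source']
```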
Land use and land cover (LULC) mapping is often undertaken by national mapping agencies, where these LULC products are used for different types of monitoring and reporting applications. Updating of LULC databases is often done on a multi-year cycle due to the high costs involved, so changes are only detected when mapping exercises are repeated. Consequently, the information on LULC can quickly become outdated and hence may be incorrect in some areas. In the current era of big data and Earth observation, change detection algorithms can be used to identify changes in urban areas, which can then be used to automatically update LULC databases on a more continuous basis. However, the change detection algorithm must be validated before the changes can be committed to authoritative databases such as those produced by national mapping agencies. This paper outlines a change detection algorithm for identifying construction sites, which represent ongoing changes in land use, developed in the framework of the LandSense project. We then use volunteered geographic information (VGI) captured through mapathons with a range of different groups of contributors to validate these changes. In total, 105 contributors were involved in the mapathons, producing 2778 observations. The contributors were grouped into six different user profiles and analyzed to understand the impact of user experience on the accuracy assessment. Overall, the results show that the change detection algorithm is able to identify changes in residential land use to an adequate level of accuracy (85%), but changes in infrastructure and industrial sites had lower accuracies (57% and 75%, respectively), requiring further improvements. In terms of user profiles, the experts in LULC from local authorities, researchers in LULC at the French national mapping agency (IGN), and first-year students with a basic knowledge of geographic information systems had the highest overall accuracies (86.2%, 93.2%, and 85.2%, respectively). Differences in how the users approach the task also emerged, e.g., local authorities used knowledge and context to try to identify types of change, while those with no knowledge of LULC (i.e., ordinary citizens) were quicker to choose ‘Unknown’ when the visual interpretation of a class was more difficult.
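A small sketch of how per-profile overall accuracy could be computed from mapathon observations, assuming each observation is scored against a reference change label; the records and profile names are illustrative, not the LandSense validation protocol:

```python
# Hypothetical sketch: overall accuracy per user profile from
# (profile, contributed label, reference label) records.
from collections import defaultdict

observations = [
    ("IGN researcher", "residential", "residential"),
    ("local authority", "industrial", "industrial"),
    ("citizen", "Unknown", "infrastructure"),
    ("first-year student", "residential", "residential"),
]

correct = defaultdict(int)
total = defaultdict(int)
for profile, label, reference in observations:
    total[profile] += 1
    correct[profile] += (label == reference)

for profile in sorted(total):
    print(f"{profile}: {correct[profile] / total[profile]:.1%}")
```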
In this paper, we describe a framework to find a good-quality waste collection tour after a flood, without having to solve a complicated optimization problem from scratch in limited time. We model the computation of a waste collection tour as a capacitated routing problem, on the vertices or on the edges of a graph, with uncertain waste quantities and uncertain road availability. Multiple models have been conceived to manage uncertainty in routing problems; building on the ideas of discretizing the uncertain parameters and computing master solutions that can later be adapted, we propose an original method to compute efficient solutions. We first introduce our model for the progressive removal of the uncertainty, then outline our method for computing solutions: it first considers a low-dimensional set of random variables that govern the behaviour of the problem parameters, discretizes these variables and computes a solution for each discrete point before the flood, and then uses these solutions as a basis to build operational solutions once there is enough information about the parameters of the routing problem. We then give computational tools to implement this method. We provide a framework to compute the basis of solutions efficiently, by computing all the solutions simultaneously and sharing information that can lead to good-quality solutions between the different problems, based on how close their parameters are, and we also describe how real solutions can be derived from this basis. Our main contributions are our model for the progressive removal of uncertainty, our multi-step method to compute efficient solutions, and our intrusive framework to compute solutions on the discrete grid of parameters.
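The following sketch illustrates the two-phase idea under strong simplifying assumptions: a single scalar random variable scales all waste quantities, a greedy nearest-neighbor heuristic stands in for the routing solver, and the online phase simply reuses the master solution whose grid point is closest to the observed value. All names and data are hypothetical, not the paper's algorithm:

```python
# Hypothetical sketch: precompute one tour set per point of a discretized
# grid of the uncertain parameter, then reuse the closest precomputed
# solution once the uncertainty is resolved.
import math

DEPOT = (0.0, 0.0)
SITES = {"A": (1.0, 2.0), "B": (4.0, 1.0), "C": (2.0, 5.0)}   # waste sites
BASE_WASTE = {"A": 3.0, "B": 5.0, "C": 2.0}                   # nominal loads
CAPACITY = 8.0                                                # per vehicle

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def greedy_tours(scale):
    """Nearest-neighbor tours under the capacity limit, with waste
    quantities scaled by `scale` (a stand-in for a real solver)."""
    assert all(scale * w <= CAPACITY for w in BASE_WASTE.values())
    pending, tours = dict(SITES), []
    while pending:
        pos, load, tour = DEPOT, 0.0, []
        while True:
            feasible = [k for k in pending
                        if load + scale * BASE_WASTE[k] <= CAPACITY]
            if not feasible:
                break
            s = min(feasible, key=lambda k: dist(pos, pending[k]))
            load += scale * BASE_WASTE[s]
            pos = pending.pop(s)
            tour.append(s)
        tours.append(tour)
    return tours

# Offline phase (before the flood): basis of master solutions on the grid.
grid = [0.5, 1.0, 1.5]
basis = {g: greedy_tours(g) for g in grid}

# Online phase: observe the realized scale, adapt the closest master solution.
observed = 1.2
start = basis[min(grid, key=lambda g: abs(g - observed))]
print(start)   # tours to repair/re-optimize with the realized data
```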