Travel-time prediction is a task of high importance in transportation networks, with web mapping services like Google Maps regularly serving vast quantities of travel-time queries from users and enterprises alike. The task requires accounting for complex spatiotemporal interactions: modelling both the topological properties of the road network and anticipating events, such as rush hours, that may occur in the future. Hence, it is an ideal target for graph representation learning at scale. Here we present a graph neural network estimator for estimated time of arrival (ETA) which we have deployed in production at Google Maps. While our main architecture consists of standard GNN building blocks, we further detail the use of training-schedule methods such as MetaGradients to make our model robust and production-ready. We also provide prescriptive studies: ablations over various architectural decisions and training regimes, and qualitative analyses of real-world situations where our model provides a competitive edge. Our GNN proved powerful when deployed, significantly reducing negative ETA outcomes in several regions compared to the previous production baseline (40+% in cities like Sydney).
Automated vehicle technology has recently become reliant on 3D LiDAR sensing for perception tasks such as mapping, localization and object detection. This has led to a rapid growth in the LiDAR manufacturing industry with several competing makers releasing new sensors regularly. With this increased variety of LiDARs, each with different properties such as number of laser emitters, resolution, field-of-view, and price tags, a more in-depth comparison of their characteristics and performance is required. This work compares 10 commonly used 3D LiDARs, establishing several metrics to assess their performance. Various outstanding issues with specific LiDARs were qualitatively identified. The accuracy and precision of individual LiDAR beams and accumulated point clouds are evaluated in a controlled environment at distances from 5 to 180 meters. Reflective targets were used to characterize intensity patterns and quantify the impact of surface reflectivity on accuracy and precision. A vehicle and pedestrian mannequin were also used as additional targets of interest. A thorough assessment of these LiDARs is given with their potential applicability for automated driving tasks. The data collected in these experiments and analysis tools are all shared openly.
Accurate vehicle positioning is important not only for in-car navigation systems but is also a requirement for emerging autonomous driving methods. Consumer level GPS are inaccurate in a number of driving environments such as in tunnels or areas where tall buildings cause satellite shadowing. Current vision-based methods typically rely on the integration of multiple sensors or fundamental matrix calculation which can be unstable when the baseline is small. In this paper we present a novel visual localization method which uses a visual street map and extracted SURF image features. By monitoring the difference in scale of features matched between input images and the visual street map within a Dynamic Time Warping framework, stable localization in the direction of motion is achieved without calculation of the fundamental or essential matrices. We present the system performance in real traffic environments. By comparing localization results with a high accuracy GPS ground truth, we demonstrate that accurate vehicle positioning is achieved.
Vehicle ego-localization is an essential process for many driver assistance and autonomous driving systems. The traditional solution of GPS localization is often unreliable in urban environments where tall buildings can cause shadowing of the satellite signal and multipath propagation. Typical visual feature based localization methods rely on calculation of the fundamental matrix which can be unstable when the baseline is small. In this paper we propose a novel method which uses the scale of matched SURF image features and Dynamic Time Warping to perform stable localization. By comparing SURF feature scales between input images and a pre-constructed database, stable localization is achieved without the need to calculate the fundamental matrix. In addition, 3D information is added to the database feature points in order to perform lateral localization, and therefore lane recognition. From experimental data captured from real traffic environments, we show how the proposed system can provide high localization accuracy relative to an image database, and can also perform lateral localization to recognize the vehicle's current lane.
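The two abstracts above both rely on Dynamic Time Warping to align a sequence of feature-scale observations from input images against a pre-constructed database. As a rough illustration of the alignment step only, the sketch below implements the classic DTW recurrence on 1-D sequences; the per-frame scale statistics and all numbers here are hypothetical, not taken from the papers.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic-time-warping distance between two 1-D sequences.

    In this illustration, each element stands for a per-frame summary of
    matched SURF feature scales (a hypothetical representation; the
    papers' actual feature handling is richer).
    """
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # step in seq_a only
                                 cost[i, j - 1],      # step in seq_b only
                                 cost[i - 1, j - 1])  # step in both
    return cost[n, m]

# Feature scales grow as the vehicle approaches a landmark, so a query
# sub-sequence aligns cheaply against the matching stretch of the database.
db_scales    = [1.0, 1.2, 1.5, 1.9, 2.4]  # hypothetical database track
query_scales = [1.2, 1.5, 1.9]            # hypothetical input track
print(dtw_distance(query_scales, db_scales))
```

Because DTW permits non-uniform stretching of the time axis, the query need not be sampled at the same rate as the database drive, which is what makes the alignment robust without epipolar geometry.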