The Virtual Autonomous Navigation Environment (VANE) is a high-fidelity, physics-based simulation process that produces realistic simulated sensor output for use in the development and testing of Autonomous Mobility Systems (AMS). VANE produces simulated output for ranging and camera sensors that is characterized by a few easily determined input parameters. This flexibility allows for the efficient characterization of a sensor's interaction with a particular AMS. This paper presents the development of these models and some initial results.
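The abstract does not specify VANE's actual parameterization, but a minimal sketch of what a ranging-sensor model "characterized by a few easily determined input parameters" might look like is shown below. The parameter names (max_range, noise_std, dropout_prob) and the error model are illustrative assumptions, not VANE's interface.

```python
import numpy as np

def simulate_range_returns(true_ranges, max_range=100.0,
                           noise_std=0.02, dropout_prob=0.01, rng=None):
    """Apply a simple parameterized error model to ideal range returns.

    true_ranges : array of ground-truth distances (meters).
    max_range   : maximum detection range (meters); assumed parameter.
    noise_std   : std. dev. of additive Gaussian range noise; assumed.
    dropout_prob: probability a beam returns no measurement; assumed.
    """
    rng = rng or np.random.default_rng()
    ranges = true_ranges + rng.normal(0.0, noise_std, true_ranges.shape)
    # Beams beyond max range, or randomly dropped, return NaN (no hit).
    lost = (ranges > max_range) | (rng.random(true_ranges.shape) < dropout_prob)
    return np.where(lost, np.nan, ranges)

# Example: five ideal returns pushed through the error model.
print(simulate_range_returns(np.array([1.0, 10.0, 50.0, 99.0, 120.0])))
```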
In the context of autonomous driving, the existing semantic segmentation concept strongly supports on-road driving, where hard inter-class boundaries are enforced and objects can be categorized based on their visible structures with high confidence. Due to the well-structured nature of typical on-road scenarios, current road extraction processes are largely successful, and most types of vehicles are able to traverse the area that is detected as road. However, the off-road driving domain has many additional uncertainties, such as uneven terrain structure, positive and negative obstacles, ditches, quagmires, and hidden objects, making it highly unstructured. Traversing such an unstructured area is constrained by a vehicle's type and its capability. Therefore, an alternative approach to segmentation of the off-road driving trail is required that supports consideration of the vehicle type in a way that is not considered in state-of-the-art on-road segmentation approaches. To overcome this limitation and facilitate path extraction in the off-road driving domain, we propose a traversability concept and a corresponding dataset, based on the notion that driving trails should be finely resolved into different sub-trails and areas corresponding to the capabilities of different vehicle classes in order to achieve safe traversal. Based on this, we consider three different classes of vehicles (sedan, pickup, and off-road) and label the images according to the traversing capability of those vehicles. The proposed dataset thus facilitates the segmentation of off-road driving trails into three regions based on the nature of the driving area and vehicle capability. We call this dataset CaT (CAVS Traversability); it is publicly available at https://www.cavs.msstate.edu/resources/downloads/CaT/CaT.tar.gz.
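A minimal sketch of how such three-class labels might be consumed is shown below. The integer encoding (0 = non-traversable, 1 = sedan, 2 = pickup, 3 = off-road) and the assumption that capabilities nest (a more capable vehicle can traverse every region a less capable one can) are mine, not documented CaT conventions.

```python
import numpy as np

# Hypothetical label encoding for a CaT-style mask (not the official one):
# 0 = non-traversable, 1 = sedan region, 2 = pickup region, 3 = off-road region.
CAPABILITY = {"sedan": 1, "pickup": 2, "offroad": 3}

def traversable_mask(label_mask, vehicle):
    """Return a boolean mask of pixels the given vehicle can traverse.

    Assumes nested capabilities: sedan < pickup < offroad, so a vehicle
    can traverse every region labeled at or below its capability level.
    """
    level = CAPABILITY[vehicle]
    return (label_mask >= 1) & (label_mask <= level)

mask = np.array([[0, 1, 2],
                 [3, 1, 0]])
print(traversable_mask(mask, "sedan"))    # only label-1 pixels
print(traversable_mask(mask, "offroad"))  # labels 1 through 3
```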
Recent developments in the area of autonomous vehicle navigation have emphasized algorithm development for the characterization of LiDAR 3D point-cloud data. LiDAR sensor data provides a detailed understanding of the environment surrounding the vehicle for safe navigation. However, LiDAR point cloud datasets need point-level labels, which require a significant amount of annotation effort. We present a framework that generates simulated labeled point cloud data. The simulated LiDAR data was generated by a physics-based platform, the Mississippi State University Autonomous Vehicle Simulator (MAVS). In this work, we use the simulation framework and labeled LiDAR data to develop and test algorithms for autonomous ground vehicle off-road navigation. The MAVS framework generates 3D point clouds for off-road environments that include trails and trees. An important first step in off-road autonomous navigation is the accurate segmentation of 3D point cloud data to identify potential obstacles in the vehicle's path. We use simulated LiDAR data to segment and detect obstacles using convolutional neural networks (CNNs). Our analysis is based on SqueezeSeg, a CNN-based model for point cloud segmentation. The CNN was trained on a labeled dataset of off-road imagery generated by MAVS and evaluated on the simulated dataset. Segmentation of the LiDAR data is done by point-wise classification, and the results show excellent accuracy in identifying different objects and obstacles in the vehicle's path. In this paper, we evaluated segmentation performance at two LiDAR vertical resolutions: 8-beam and 16-beam. The results showed about a 5% increase in accuracy with the 16-beam sensor compared with the 8-beam sensor in detecting obstacles and trees. However, the 8-beam LiDAR's performance is comparable with that of the 16-beam sensor in segmenting vegetation, trail-road, and ground.
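The pipeline the abstract describes, spherical projection of the point cloud into a range image followed by per-pixel CNN classification, can be sketched as below. This is not the actual SqueezeSeg architecture; the field-of-view values, class list, and tiny network are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

def spherical_projection(points, v_beams=16, h_bins=512, v_fov=(-15.0, 15.0)):
    """Project an (N, 3) point cloud to a (1, v_beams, h_bins) range image,
    the SqueezeSeg-style input representation (FOV values are assumed)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                                  # azimuth angle
    pitch = np.degrees(np.arcsin(z / np.maximum(r, 1e-6)))  # elevation angle
    u = ((yaw + np.pi) / (2 * np.pi) * h_bins).astype(int) % h_bins
    v = ((pitch - v_fov[0]) / (v_fov[1] - v_fov[0]) * v_beams).astype(int)
    v = np.clip(v, 0, v_beams - 1)
    img = np.zeros((1, v_beams, h_bins), dtype=np.float32)
    img[0, v, u] = r                                        # last return per cell
    return img

class TinySegNet(nn.Module):
    """Toy stand-in for SqueezeSeg: per-pixel classification of the range
    image into classes such as obstacle, tree, vegetation, trail-road,
    and ground (class count taken from the abstract's object list)."""
    def __init__(self, num_classes=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_classes, 1),  # per-pixel class logits
        )

    def forward(self, x):
        return self.net(x)

cloud = np.random.rand(2048, 3) * 20 - 10        # stand-in point cloud
img = torch.from_numpy(spherical_projection(cloud)).unsqueeze(0)
logits = TinySegNet()(img)                       # shape (1, 5, 16, 512)
pred = logits.argmax(dim=1)                      # point-wise class labels
print(pred.shape)
```

Changing v_beams between 16 and 8 reproduces the resolution comparison the abstract reports: fewer beams coarsen the vertical sampling of the range image the CNN sees.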