The rising popularity of autonomous vehicles has led to the development of driverless racing cars, where the competitive nature of motorsport has the potential to drive innovations in autonomous vehicle technology. The challenge of racing requires the sensors, object detection and vehicle control systems to work together at the highest possible speed and computational efficiency. This paper describes an autonomous driving system for a self-driving racing vehicle application using a modest sensor suite coupled with accessible processing hardware, with an object detection system capable of a frame rate of 25 fps and a mean average precision of 92%. A modelling tool is developed in open-source software for real-time dynamic simulation of the autonomous vehicle and associated sensors, which is fully interchangeable with the real vehicle. The simulator provides performance metrics, which enables accelerated and enhanced quantitative analysis, tuning and optimisation of the autonomous control system algorithms. A design study demonstrates the ability of the simulation to assist in control system parameter tuning, resulting in a 12% reduction in lap time and an average velocity of 25 km/h, indicating the value of using simulation for the optimisation of multiple parameters in the autonomous control system.
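The following is a minimal sketch of the kind of parameter sweep that an interchangeable simulator makes practical: simulated laps are run over a grid of candidate controller settings and the configuration with the lowest lap time is retained. The simulator interface and the parameter names (lookahead, speed_gain) are hypothetical placeholders, not the interface of the tool described above.

    from itertools import product

    def run_lap(sim, lookahead, speed_gain):
        """Run one simulated lap with the given controller settings and
        return (lap_time_s, mean_speed_kmh) from the simulator's metrics."""
        sim.reset(controller={"lookahead": lookahead, "speed_gain": speed_gain})
        while not sim.lap_complete():
            sim.step()
        return sim.lap_time(), sim.mean_speed()

    def tune(sim, lookaheads=(2.0, 3.0, 4.0), speed_gains=(0.5, 0.75, 1.0)):
        """Exhaustively sweep the controller grid and keep the fastest lap."""
        best = None
        for lookahead, speed_gain in product(lookaheads, speed_gains):
            lap_time, mean_speed = run_lap(sim, lookahead, speed_gain)
            if best is None or lap_time < best[0]:
                best = (lap_time, mean_speed, lookahead, speed_gain)
        return best  # (lap_time_s, mean_speed_kmh, lookahead, speed_gain)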
Autonomous vehicles make use of sensors to perceive the world around them, with heavy reliance on vision-based sensors such as RGB cameras. Unfortunately, since these sensors are affected by adverse weather, perception pipelines require extensive training on visual data captured under harsh conditions in order to improve the robustness of downstream tasks, and such data is difficult and expensive to acquire. Based on GAN and CycleGAN architectures, we propose an overall (modular) architecture for constructing datasets, which allows one to add, swap out and combine components in order to generate images with diverse weather conditions. Starting from a single dataset with ground truth, we generate 7 versions of the same data in diverse weather, and propose an extension to augment the generated conditions, resulting in a total of 14 adverse weather conditions while requiring only a single ground truth. We test the quality of the generated conditions both in terms of perceptual quality and suitability for training downstream tasks, using real-world, out-of-distribution adverse weather extracted from various datasets. We show improvements in both object detection and instance segmentation across all conditions, in many cases exceeding a 10 percentage point increase in AP, and provide the materials and instructions needed to reconstruct the multi-weather dataset, based upon the original Cityscapes dataset.
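As a rough illustration of how a single ground-truth dataset can be expanded in this way, the sketch below pushes one clear-weather image through a separate image-to-image generator per target condition, so the original labels can be reused for every generated variant. The TinyGenerator class, the condition list, the input filename and the commented checkpoint path are hypothetical stand-ins for the CycleGAN-based generators described above, assuming PyTorch and torchvision are available.

    import torch
    import torch.nn as nn
    from PIL import Image
    from torchvision import transforms

    class TinyGenerator(nn.Module):
        """Stand-in image-to-image network; the real pipeline would use a
        full CycleGAN ResNet generator trained for each weather condition."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=7, padding=3), nn.ReLU(),
                nn.Conv2d(32, 3, kernel_size=7, padding=3), nn.Tanh(),
            )

        def forward(self, x):
            return self.net(x)

    preprocess = transforms.Compose([
        transforms.Resize((256, 512)),
        transforms.ToTensor(),
        transforms.Normalize((0.5,) * 3, (0.5,) * 3),
    ])

    clear = preprocess(Image.open("clear_scene.png").convert("RGB")).unsqueeze(0)
    for condition in ["rain", "fog", "snow", "night"]:  # hypothetical condition list
        gen = TinyGenerator().eval()
        # gen.load_state_dict(torch.load(f"clear2{condition}.pth"))  # per-condition weights
        with torch.no_grad():
            adverse = gen(clear)  # same scene and labels, new weather appearance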
Autonomous vehicles rely heavily upon their perception subsystems to 'see' the environment in which they operate. Unfortunately, the effect of varying weather conditions presents a significant challenge to object detection algorithms, and thus it is imperative to test the vehicle extensively in all conditions it may experience. However, unpredictable weather can make real-world testing in adverse conditions an expensive and time-consuming task requiring access to specialist facilities and weatherproofing of sensitive electronics. Simulation provides an alternative to real-world testing, with some studies developing increasingly visually realistic representations of the real world on powerful compute hardware. Given that subsequent subsystems in the autonomous vehicle pipeline are unaware of the visual realism of the simulation, appearance is of little consequence when developing modules downstream of perception; what matters is how the perception system performs in the prevailing weather conditions. This study explores the potential of using a simple, lightweight image augmentation system in an autonomous racing vehicle, focusing not on visual accuracy but on the effect upon perception system performance. With minimal adjustment, the prototype system developed in this study can replicate the effects of both water droplets on the camera lens and fading light conditions. The system introduces a latency of less than 8 ms using compute hardware that is well suited to being carried in the vehicle, making it ideal for real-time implementation that can be run during experiments in simulation and augmented reality testing in the real world.
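A minimal sketch of the sort of lightweight augmentation meant here is shown below, assuming OpenCV and NumPy and a BGR frame from the on-vehicle camera; the droplet and low-light models and the input filename are illustrative stand-ins rather than the prototype's exact implementation.

    import cv2
    import numpy as np

    def add_lens_droplets(frame, n_drops=20, max_radius=15, rng=None):
        """Blur small circular patches to mimic water droplets on the lens."""
        rng = rng or np.random.default_rng()
        h, w = frame.shape[:2]
        blurred = cv2.GaussianBlur(frame, (21, 21), 0)
        mask = np.zeros((h, w), dtype=np.uint8)
        for _ in range(n_drops):
            x, y = int(rng.integers(0, w)), int(rng.integers(0, h))
            r = int(rng.integers(3, max_radius))
            cv2.circle(mask, (x, y), r, 255, -1)
        out = frame.copy()
        out[mask > 0] = blurred[mask > 0]
        return out

    def fade_light(frame, brightness=0.5):
        """Scale pixel intensities to mimic fading light; brightness in (0, 1]."""
        return cv2.convertScaleAbs(frame, alpha=brightness, beta=0)

    # Example: augment a frame just before it is passed to the object detector.
    frame = cv2.imread("camera_frame.png")
    augmented = fade_light(add_lens_droplets(frame), brightness=0.4)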