The degree of autonomy in vehicles depends directly on the performance of their sensor systems. The transition to ever more autonomous cars therefore requires the development of robust sensor systems with complementary capabilities. Especially in adverse and changing weather conditions (rain, snow, fog, etc.), conventional sensor systems such as cameras perform unreliably. Moreover, data evaluation has to be performed in real time, i.e. within a fraction of a second, in order to safely guide the car through traffic and avoid collisions with obstacles. We therefore propose to use a so-called time-gated single-pixel camera, which combines the principles of time gating and compressed sensing [1]. In a single-pixel camera, the amount of recorded data can be reduced significantly compared to a conventional camera by exploiting the inherent sparsity of scenes. The lateral information is gained with the help of binary masks placed in front of a simple photodiode. We optimize the mask patterns by including them as trainable parameters in our data evaluation neural network. Additionally, our camera is able to cope with adverse weather conditions thanks to the underlying time-gating principle. The feasibility of our method is demonstrated on both simulated and measured data.
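The idea of treating the measurement masks as trainable parameters of the evaluation network can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration in PyTorch (layer sizes, the sigmoid relaxation of the binary masks, and all names are ours, not taken from the paper): the mask logits are learned jointly with a small decoder that processes the compressed single-pixel measurements.

```python
# Hedged sketch: jointly learning single-pixel measurement masks and the
# evaluation network. All dimensions and the sigmoid relaxation are assumptions.
import torch
import torch.nn as nn

class SinglePixelNet(nn.Module):
    def __init__(self, img_size=32, num_masks=64):
        super().__init__()
        # Trainable mask logits; relaxed to (0, 1) during training and
        # thresholded to hard 0/1 patterns for the physical masks.
        self.mask_logits = nn.Parameter(torch.randn(num_masks, img_size * img_size))
        self.decoder = nn.Sequential(
            nn.Linear(num_masks, 256), nn.ReLU(),
            nn.Linear(256, img_size * img_size),
        )

    def forward(self, img_flat):
        masks = torch.sigmoid(self.mask_logits)      # relaxed binary masks
        y = img_flat @ masks.t()                     # compressed single-pixel measurements
        return self.decoder(y)                       # reconstruction (or detection head)

# Shape check on random data:
net = SinglePixelNet()
img = torch.rand(8, 32 * 32)                          # batch of flattened scenes
out = net(img)                                        # (8, 1024) outputs
```

Because the masks enter the loss through the measurements, gradient descent shapes them towards patterns that are informative for the downstream task rather than generic random patterns.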
There are two key requirements for enabling autonomous vehicles: robust sensors that provide all relevant data about the vehicle's surroundings, and algorithms that evaluate this data in real time. Apart from radar and ultrasonic sensors, optical sensors such as lidar and cameras are state of the art in prototype autonomous vehicles. In adverse weather conditions, such as fog, snow, dust, heavy rain, or poorly illuminated scenes, however, these sensors do not perform reliably. Recently, we proposed to use a time-gated single-pixel camera not only to significantly reduce the amount of recorded data but also to filter ballistic object photons, i.e. to suppress the noise contributed by the obscuring medium. Apart from generating 3D object information, such a system can operate fast enough to deal with the highly dynamic environment while respecting eye-safety norms. Moreover, a time-gated single-pixel camera offers image-free detection of all relevant objects within the scene, which speeds up data evaluation as well. Here, we report on our progress towards realizing such a system. We demonstrate image-free object detection on simulated data and realize multi-object detection by generating object heat-maps for the different classes. Additionally, we discuss the difficulties that must be overcome to robustly detect objects in real measured data and briefly present our prototype setup, which we have implemented on a car together with our partners from Fraunhofer.
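A rough sketch of the image-free, heat-map-based detection idea is given below. It is our own illustration (PyTorch; the number of measurements, classes, and map resolution are placeholder assumptions): a small network maps the single-pixel measurement vector directly to one low-resolution heat-map per object class, without ever reconstructing an image.

```python
# Hedged sketch of image-free multi-object detection via per-class heat-maps.
# All layer sizes and the loss/activation choices are assumptions.
import torch
import torch.nn as nn

class HeatmapHead(nn.Module):
    def __init__(self, num_measurements=64, num_classes=3, map_size=16):
        super().__init__()
        self.num_classes = num_classes
        self.map_size = map_size
        self.net = nn.Sequential(
            nn.Linear(num_measurements, 512), nn.ReLU(),
            nn.Linear(512, num_classes * map_size * map_size),
        )

    def forward(self, y):
        h = self.net(y)
        # One heat-map per class; sigmoid gives per-cell object probabilities.
        return torch.sigmoid(h).view(-1, self.num_classes, self.map_size, self.map_size)

measurements = torch.rand(4, 64)            # batch of compressed measurements
heatmaps = HeatmapHead()(measurements)      # (4, 3, 16, 16) class heat-maps
```

Skipping the image reconstruction step is what keeps the evaluation fast: the network only has to produce a coarse spatial likelihood per class instead of a full-resolution image.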
Over the past years, a lot of effort has been focused on realizing the vision of fully autonomous vehicles. Achieving this goal strongly depends on the development of sensors that allow the environment to be perceived by scanning it with high speed, precision, and resolution. The sensors employed in autonomous vehicles typically comprise cameras, radar, and lidar systems. Lidar and camera sensors in particular deliver the necessary high-resolution data, but both suffer from strongly degrading signals in low-visibility conditions. To guarantee the safe operation of autonomous vehicles, existing sensors need to be improved with respect to these conditions and new sensors need to be developed. In this contribution we present a lidar system design that is optimized for operation in low-visibility conditions. We address the technical details of the system, such as the choice of laser, detector, deflection unit, and signal processing electronics. Beyond these technical details, we discuss physical and technological limitations such as wavelength-dependent scattering and absorption as well as eye-safety considerations. We further give an outlook on a sensor fusion approach with a time-gated sensor with high lateral resolution for better recognition of objects obscured by scattering media.
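To make the effect of scattering and absorption on the lidar budget concrete, the following sketch evaluates a simplified monostatic lidar range equation with an exponential extinction term. It is purely illustrative (numpy; the extinction coefficients, reflectivity, and aperture are made-up placeholder values, not measured fog data).

```python
# Illustrative sketch of range-dependent lidar return under atmospheric extinction:
# P_r ∝ P_t * rho * A / (pi * R^2) * exp(-2 * alpha * R). Values are assumptions.
import numpy as np

def received_power(P_t, R, alpha, rho=0.1, A=1e-3):
    """Simplified monostatic lidar return: transmitted power P_t [W],
    range R [m], extinction coefficient alpha [1/m], target reflectivity rho,
    receiver aperture A [m^2]."""
    return P_t * rho * A / (np.pi * R**2) * np.exp(-2.0 * alpha * R)

ranges = np.linspace(10, 200, 5)                 # ranges in metres
for alpha in (0.001, 0.02):                      # assumed clear-air vs. dense-fog extinction
    print(f"alpha = {alpha}:", received_power(1.0, ranges, alpha))
```

The two-way exponential term dominates at long range in fog, which is why both the choice of wavelength (through alpha) and eye-safe transmit power limits constrain the achievable detection range.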