Object detection in road scenes is essential for developing both autonomous vehicles and driving assistance systems. Although deep neural networks for recognition tasks have shown strong performance on conventional images, they fail to detect objects in road scenes under complex acquisition conditions. In contrast, polarization images, which characterize the light wave, can robustly describe important physical properties of an object even under poor illumination or strong reflections. This paper shows how the non-conventional polarimetric imaging modality outperforms classical methods for object detection, especially in adverse weather conditions. The effectiveness of the proposed method stems mainly from the strong ability of polarimetry to discriminate objects by their reflective properties and from the use of deep neural networks for object detection. The goal of this work is to show that polarimetry brings real added value over RGB images for object detection. Experimental results on our own dataset of road scene images acquired in adverse weather conditions show that polarimetry combined with deep learning can improve the state of the art by about 20% to 50% on different detection tasks.
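As background, polarimetric cameras commonly record intensities behind linear polarizers oriented at 0°, 45°, 90° and 135°; the Stokes parameters and the degree and angle of linear polarization are then derived from these measurements. The minimal NumPy sketch below, with hypothetical array names, illustrates this standard computation; the exact polarimetric features used in our pipeline may differ.

import numpy as np

def polarimetric_channels(i0, i45, i90, i135, eps=1e-8):
    """Derive standard polarimetric features from four linear-polarizer intensities.

    i0, i45, i90, i135: float arrays of identical shape, measured behind
    linear polarizers at 0, 45, 90 and 135 degrees (assumed acquisition setup).
    Returns total intensity S0, degree of linear polarization (DoLP)
    and angle of linear polarization (AoLP).
    """
    # First three Stokes parameters of partially linearly polarized light.
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical component
    s2 = i45 - i135                      # +45 deg vs. -45 deg component

    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)   # in [0, 1]
    aolp = 0.5 * np.arctan2(s2, s1)              # in [-pi/2, pi/2]
    return s0, dolp, aolp

# Example: stack (S0, DoLP, AoLP) as a 3-channel image for a detection network.
if __name__ == "__main__":
    h, w = 480, 640
    raws = [np.random.rand(h, w) for _ in range(4)]   # placeholder frames
    s0, dolp, aolp = polarimetric_channels(*raws)
    img = np.stack([s0 / (s0.max() + 1e-8),           # normalize each channel to [0, 1]
                    dolp,
                    (aolp + np.pi / 2) / np.pi], axis=-1)
    print(img.shape)  # (480, 640, 3)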
Road scene analysis is a fundamental task for both autonomous vehicles and advanced driver-assistance systems (ADAS). Current autonomous vehicles can properly detect objects in the scene in good weather conditions, but improvements remain to be made when visibility is degraded. Non-conventional sensors (infrared, LiDAR, etc.) used alongside classical vision are reported to enhance road scene analysis, yet mostly when conditions are favorable. In this work, we present the improvements achieved with polarimetric imaging in the challenging case of adverse weather conditions. This rich modality is known for its ability to describe an object not only by its intensity but also by its physical properties, even under poor illumination and strong reflections. Experimental results on our new multimodal dataset show that polarimetric imaging provides features that generalize to both good and adverse weather conditions. By combining polarimetric images with an adapted learning model, the different detection tasks in adverse weather conditions were improved by about 27%.
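To make the detection setup concrete, the sketch below shows one possible way to feed a 3-channel polarimetric image (for example S0, DoLP, AoLP scaled to [0, 1]) to an off-the-shelf detector from torchvision. This is only an assumed illustration of such a pipeline, not the exact architecture or training configuration behind the reported results.

import torch
import torchvision

# Minimal sketch: an off-the-shelf Faster R-CNN applied to a 3-channel
# polarimetric image (assumed channels: S0, DoLP, AoLP in [0, 1]).
# The actual model and fine-tuning procedure may differ from this illustration.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

polar_image = torch.rand(3, 480, 640)  # placeholder for a real polarimetric frame

with torch.no_grad():
    predictions = model([polar_image])  # returns one dict per input image

# Each prediction dict contains 'boxes', 'labels' and 'scores'.
print(predictions[0]["boxes"].shape)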