Vision sensors are versatile and can capture a wide range of visual cues, such as color, texture, shape, and depth. This versatility, along with the ready availability of inexpensive machine vision cameras, has played an important role in the adoption of vision-based environment perception systems in autonomous vehicles (AVs). However, vision-based perception systems can easily be affected by glare in the presence of a bright light source, such as the sun or the headlights of an oncoming vehicle at night, or simply by light reflecting off snow- or ice-covered surfaces; all of these scenarios are encountered frequently during driving. In this paper, we investigate various glare reduction techniques, including the proposed saturated pixel-aware glare reduction technique, for improved performance of the computer vision (CV) tasks employed by the perception layer of AVs. We evaluate these glare reduction methods based on various performance metrics of the CV algorithms used by the perception layer. Specifically, we consider object detection, object recognition, object tracking, depth estimation, and lane detection, all of which are crucial for autonomous driving. The experimental findings validate the efficacy of the proposed glare reduction approach, showcasing enhanced performance across diverse perception tasks and remarkable resilience against varying levels of glare.

Keywords: Autonomous vehicles, environment perception, glare reduction, dark channel prior.
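As background for the dark channel prior named in the keywords, the following is a minimal sketch of its core computation in Python; the function name, the OpenCV/NumPy implementation, and the patch size are illustrative assumptions, not details taken from this paper.

    import cv2
    import numpy as np

    def dark_channel(image, patch_size=15):
        # Dark channel prior (He et al., CVPR 2009): per-pixel minimum
        # over the color channels, followed by a minimum filter over a
        # local patch (implemented here as a grayscale erosion).
        min_channel = np.min(image, axis=2)
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT,
                                           (patch_size, patch_size))
        return cv2.erode(min_channel, kernel)

    # Hypothetical usage on a captured frame: glare-affected (saturated)
    # regions tend to produce high dark channel values, which can be used
    # to flag them before restoration.
    img = cv2.imread("frame.png").astype(np.float64) / 255.0
    dc = dark_channel(img, patch_size=15)

In dehazing-style pipelines, the dark channel is typically used to estimate the atmospheric light and a transmission map before recovering the scene radiance.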
I. INTRODUCTION

An accurate and robust environmental perception system is crucial for the advancement of intelligent transportation, especially in the case of self-driving vehicles [1]. Meeting the requirements of level 5 autonomy, as specified in the J3016 [2] international standard, entails the ability to operate outside the so-called operational design domain: instead of being confined to a carefully managed (usually urban) environment with extensive dedicated infrastructure, autonomous vehicles (AVs) should be able to operate in uncontrolled environments, including challenging weather, glare, haze, and fog causing illumination variation, poorly marked roads, and unpredictable road users [3]. The perception layer in the AV software stack is responsible for perceiving, in a timely manner, the changes happening in the vehicle's environment through various computer vision (CV) tasks, such as object detection and recognition, depth estimation, lane detection, and more. In recent times, the vision-based perception layer in AVs has gained immense popularity [4]. Several automotive companies, such as Tesla, BMW, and Mobileye, have created their own vision-based perception systems. This trend can be attributed to several factors, including