One of the primary tasks undertaken by autonomous vehicles (AVs) is object detection, which precedes object tracking, trajectory estimation, and collision avoidance. Vulnerable road objects (e.g., pedestrians and cyclists) pose a greater challenge to the reliability of object detection because their behavior changes continuously. Most commercially available AVs, and much of the research into them, depend on expensive sensors, which hinders further research on AV operations. In this paper, therefore, we focus on the use of a lower-cost single-beam LiDAR together with a monocular camera to achieve 3D detection of multiple vulnerable objects in real driving scenarios, while maintaining real-time performance. This research also addresses problems faced during object detection, such as the complex interaction between objects where occlusion and truncation occur, and the dynamic changes in the perspective and scale of bounding boxes. The video-processing module is built on a deep-learning detector (YOLOv3), while the LiDAR measurements are pre-processed and grouped into clusters. The proposed system outputs object classifications and localizations as bounding boxes accompanied by a third, depth dimension acquired from the LiDAR. Real-time tests show that the system can efficiently detect the 3D location of vulnerable objects in real driving scenarios.
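A minimal sketch of the camera-LiDAR fusion idea described above is given below. It clusters single-beam LiDAR returns by range gaps and attaches a cluster depth to each 2D detection whose box contains the projected cluster centroid. The detection tuples, the clustering threshold, the intrinsics (fx, cx), and the identity LiDAR-to-camera alignment are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: attaching single-beam LiDAR depth to 2D detections.
import numpy as np

def cluster_scan(angles, ranges, gap=0.5):
    """Group consecutive LiDAR returns into clusters wherever the point-to-point gap is small."""
    xy = np.stack([ranges * np.cos(angles), ranges * np.sin(angles)], axis=1)
    clusters, current = [], [0]
    for i in range(1, len(xy)):
        if np.linalg.norm(xy[i] - xy[i - 1]) < gap:
            current.append(i)
        else:
            clusters.append(current)
            current = [i]
    clusters.append(current)
    return [xy[idx].mean(axis=0) for idx in clusters]  # cluster centroids (forward x, lateral y), metres

def attach_depth(detections, centroids, fx=700.0, cx=640.0):
    """Assign each 2D box the range of the nearest cluster whose projection falls inside it."""
    results = []
    for (x1, y1, x2, y2, label) in detections:      # boxes from a YOLO-style detector (assumed format)
        best = None
        for fwd, lat in centroids:
            if fwd <= 0:                            # behind the camera; skip
                continue
            u = cx + fx * (lat / fwd)               # pinhole projection, identity extrinsics assumed
            if x1 <= u <= x2:
                depth = float(np.hypot(fwd, lat))
                best = depth if best is None else min(best, depth)
        results.append((label, (x1, y1, x2, y2), best))
    return results
```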
In autonomous driving, object detection is a base step for many subsequent processes. However, object detection is challenged by the loss of visibility caused by rain. Rain degrades images in two main forms: rain streaks and streak accumulation. Each degradation type affects the captured video differently; therefore, the two cannot be mitigated in the same way. We propose a lightweight network that mitigates both types of rain degradation in real time without negatively affecting the object-detection task. The proposed network consists of two modules applied in sequence: a progressive ResNet for rain-streak removal, followed by a transmission-guided lightweight network for removing rain-streak accumulation. The network has been tested on synthetic and real rainy datasets and compared with state-of-the-art (SOTA) networks. Additionally, runtime evaluation has been performed to verify real-time performance. Finally, the effect of the developed deraining network has been tested on the YOLO object-detection network. The proposed network exceeds the SOTA by 1.12 dB in average PSNR across multiple synthetic datasets, with a 2.29× speedup. The results suggest that composing lightweight stages works favorably for real-time applications and could be extended to mitigate other degradation factors such as snow and sun glare.
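The following is a hedged sketch of the two-stage deraining front end described above: a recursive residual stage for streaks followed by a lightweight stage guided by an estimated transmission map, with the output frame then passed to the detector. The layer widths, step count, and the simple scattering-model inversion are placeholders, not the paper's architecture.

```python
# Hedged sketch of a two-stage deraining front end (PyTorch).
import torch
import torch.nn as nn

class StreakStage(nn.Module):
    """Progressive residual stage: reuses one small CNN to iteratively subtract streak residuals."""
    def __init__(self, channels=32, steps=4):
        super().__init__()
        self.steps = steps
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, x):
        out = x
        for _ in range(self.steps):          # same weights each pass (progressive refinement)
            out = out - self.body(out)
        return out

class AccumulationStage(nn.Module):
    """Lightweight stage: predicts a transmission map and inverts the veiling (accumulation) effect."""
    def __init__(self, channels=16):
        super().__init__()
        self.trans = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1), nn.Sigmoid(),
        )
        self.refine = nn.Conv2d(3, 3, 3, padding=1)

    def forward(self, x):
        t = self.trans(x).clamp(min=0.05)                       # transmission map in (0, 1]
        airlight = x.mean(dim=(2, 3), keepdim=True)             # crude global atmospheric light estimate
        restored = (x - airlight * (1 - t)) / t                 # scattering-model inversion
        return self.refine(restored)

derain = nn.Sequential(StreakStage(), AccumulationStage())
frame = torch.rand(1, 3, 256, 256)        # placeholder rainy frame
clean = derain(frame)                     # the derained frame is what the detector would consume
```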