In this paper, we generated intelligent self-driving policies that minimize injury severity in unexpected traffic signal violation scenarios at an intersection using deep reinforcement learning. We provided guidance on reward engineering with respect to the multiplicity of objective functions. We used a deep deterministic policy gradient (DDPG) method in a simulated environment to train self-driving agents. We designed two agents, one with a single-objective reward function for collision avoidance and the other with a multi-objective reward function for both collision avoidance and goal-approaching. We evaluated their performance by comparing the percentage of collision avoidance and the average injury severity against those of human drivers and an autonomous emergency braking (AEB) system. The collision avoidance percentage of our agents was 78.89% higher than that of human drivers and 84.70% higher than that of the AEB system. The average injury severity score of our agents was only 8.92% of that of human drivers and 6.25% of that of the AEB system.
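As a rough illustration of the single- versus multi-objective reward engineering described above, the following Python sketch contrasts the two formulations. The function names, weights, and impact-speed scaling are assumptions made here for illustration only, not the paper's exact reward definitions.

import numpy as np

def collision_avoidance_reward(collided: bool, impact_speed: float) -> float:
    """Single-objective reward: penalize collisions, scaled by impact speed
    as a proxy for injury severity."""
    if collided:
        return -1.0 - 0.1 * impact_speed  # harsher penalty for faster impacts
    return 0.0

def multi_objective_reward(collided: bool, impact_speed: float,
                           dist_to_goal: float, prev_dist_to_goal: float,
                           w_goal: float = 0.05) -> float:
    """Multi-objective reward: collision avoidance plus goal-approaching,
    rewarding progress toward the far side of the intersection."""
    r_collision = collision_avoidance_reward(collided, impact_speed)
    r_goal = w_goal * (prev_dist_to_goal - dist_to_goal)  # positive when closing the gap
    return r_collision + r_goal

# Example step: the agent moved 1.2 m closer to its goal without colliding.
print(multi_objective_reward(False, 0.0, dist_to_goal=18.8, prev_dist_to_goal=20.0))

Either reward could be plugged into a standard DDPG training loop; the multi-objective variant simply adds a dense progress term on top of the sparse collision penalty.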
Recent deep learning techniques raise high hopes for self-driving cars, while many issues remain to be addressed, such as uncertainties (e.g., extreme weather conditions) in learned models. In this work on uncertainty-aware lane keeping, we first propose a convolutional mixture density network (CMDN) model that estimates the lateral position error, the yaw angle error, and their corresponding uncertainties from camera vision. We then establish a vision-based uncertainty-aware lane keeping strategy in which a high-level reinforcement learning policy hierarchically modulates the reference longitudinal speed as well as the low-level lateral control. Finally, we evaluate the robustness of our strategy against the uncertainties of the learned CMDN model arising from unseen or noisy situations, compared to a conventional lane keeping strategy that does not take such uncertainties into account. Our uncertainty-aware strategy outperformed the conventional lane keeping strategy, avoiding lane departures in our test scenario during high-uncertainty periods with randomly occurring fog and rain on the road. The successfully trained deep reinforcement learning agent slows the vehicle down and keeps the steering angle neutral during high-uncertainty situations, similarly to what human drivers would do.
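To make the CMDN idea concrete, here is a minimal PyTorch sketch of a convolutional backbone with a mixture density head that predicts the lateral position error and yaw angle error along with a predictive variance per output. The layer sizes, number of mixture components, and variance aggregation are illustrative assumptions, not the authors' architecture.

import torch
import torch.nn as nn

class CMDN(nn.Module):
    def __init__(self, n_components: int = 3, n_outputs: int = 2):
        super().__init__()
        self.k, self.d = n_components, n_outputs  # mixtures; (lateral error, yaw error)
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.pi = nn.Linear(32, self.k)                  # mixture weights
        self.mu = nn.Linear(32, self.k * self.d)         # per-component means
        self.log_sigma = nn.Linear(32, self.k * self.d)  # per-component std devs (log space)

    def forward(self, img):
        h = self.backbone(img)
        pi = torch.softmax(self.pi(h), dim=-1)                         # (B, K)
        mu = self.mu(h).view(-1, self.k, self.d)                       # (B, K, D)
        sigma = torch.exp(self.log_sigma(h)).view(-1, self.k, self.d)  # (B, K, D)
        # Mixture mean and total predictive variance per output (law of total variance).
        mean = (pi.unsqueeze(-1) * mu).sum(dim=1)
        var = (pi.unsqueeze(-1) * (sigma ** 2 + mu ** 2)).sum(dim=1) - mean ** 2
        return mean, var

model = CMDN()
mean, var = model(torch.randn(1, 3, 120, 160))  # dummy camera frame
print(mean.shape, var.shape)  # predicted errors and their uncertainties

In this sketch, the high-level policy would consume the predicted variances and lower the reference longitudinal speed when they grow large, which is the role the hierarchical reinforcement learning policy plays in the strategy above.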