Motion distortion in LiDAR scans, caused by aggressive robot motion and environmental terrain, significantly degrades the positioning and mapping performance of 3D LiDAR odometry. Existing distortion-correction solutions struggle to balance computational complexity and accuracy. In this letter, we propose an Adaptive Temporal Interval-based Continuous-Time LiDAR-only Odometry, which is based on straightforward and efficient linear interpolation. Our method flexibly adjusts the temporal intervals between control nodes according to motion dynamics and environmental degeneracy. This adaptability enhances performance across various motion states and improves the algorithm's robustness in degenerate, particularly feature-sparse, environments. We validated our method's effectiveness on multiple datasets across different platforms, achieving accuracy comparable to state-of-the-art LiDAR-only odometry methods. Notably, under aggressive motion and sparse features, our method outperforms existing LiDAR-only methods.
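The core idea of the abstract above, correcting per-point motion distortion by linearly interpolating the sensor pose between two control nodes, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `deskew_points`, the two-node pose convention, and the choice of the scan-end frame as the target are all assumptions; rotations are interpolated with slerp via SciPy.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def deskew_points(points, timestamps, t0, t1, pose0, pose1):
    """Correct motion distortion by linearly interpolating the sensor pose
    between two control nodes (t0, pose0) and (t1, pose1).

    points:     (N, 3) raw LiDAR points in the sensor frame
    timestamps: (N,) per-point capture times within [t0, t1]
    pose0/1:    (R, p) pairs -- scipy Rotation and (3,) translation
    Returns points re-expressed in the scan-end frame (pose at t1).
    """
    R0, p0 = pose0
    R1, p1 = pose1
    slerp = Slerp([t0, t1], Rotation.concatenate([R0, R1]))
    alpha = (timestamps - t0) / (t1 - t0)

    corrected = np.empty_like(points)
    for i, (pt, a, ti) in enumerate(zip(points, alpha, timestamps)):
        Ri = slerp([ti])[0]           # interpolated rotation (slerp)
        pi = (1 - a) * p0 + a * p1    # interpolated translation (lerp)
        world = Ri.apply(pt) + pi     # point at its true capture pose
        corrected[i] = R1.inv().apply(world - p1)  # into scan-end frame
    return corrected
```

An adaptive-interval scheme, as the abstract describes, would shrink the spacing `t1 - t0` between such control nodes during aggressive motion and widen it in degenerate, feature-sparse stretches.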
As an indispensable branch of machine learning (ML), reinforcement learning (RL) plays a prominent role in decision-making for autonomous driving (AD), enabling autonomous vehicles (AVs) to learn an optimal driving strategy through continuous interaction with the environment. This paper proposes a deep reinforcement learning (DRL)-based motion planning strategy for AD tasks in highway scenarios, where an AV merges into two-lane traffic flow and performs lane-changing (LC) maneuvers. We integrate the DRL model into the AD system using an end-to-end learning method. An improved DRL algorithm based on deep deterministic policy gradient (DDPG) is developed with well-defined reward functions. In particular, safety rules (SR), a safety prediction (SP) module, trauma memory (TM), and a dynamic potential-based reward shaping (DPBRS) function are adopted to further enhance safety and accelerate learning of the LC behavior. For validation, the proposed DSSTD algorithm is trained and tested on a dual-computer co-simulation platform. Comparative experimental results show that our proposal outperforms other benchmark algorithms in both driving safety and efficiency.
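The potential-based reward shaping mentioned above follows a standard form, r' = r + γΦ(s') − Φ(s), which is known to leave optimal policies unchanged. Below is a hedged sketch under stated assumptions: the potential `lane_potential` (rewarding proximity to the target lane center) and its `max_offset` parameter are illustrative inventions, not the paper's DPBRS function.

```python
def lane_potential(lateral_offset, max_offset=3.5):
    """Hypothetical potential Phi(s): rises toward 1.0 as the AV
    approaches the target lane center.

    lateral_offset: signed distance (m) from the target lane center.
    max_offset:     assumed lane width beyond which Phi saturates at 0.
    """
    return 1.0 - min(abs(lateral_offset) / max_offset, 1.0)

def shaped_reward(r, s, s_next, gamma=0.99):
    """Potential-based reward shaping: r' = r + gamma*Phi(s') - Phi(s).
    Here the state is reduced to the lateral offset for illustration."""
    return r + gamma * lane_potential(s_next) - lane_potential(s)
```

A "dynamic" variant, as the DPBRS name suggests, would additionally let the potential depend on time or learning progress rather than on the state alone.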
Planning for autonomous vehicles to merge into high-density traffic flows within limited mileage is quite challenging. Specifically, the driving trajectory will inevitably intersect with those of other vehicles, whose driving intentions cannot be directly observed. Herein, a two-stage algorithm framework for online merging planning, decomposed into longitudinal and lateral planning processes, is proposed. An improved particle filter is used to estimate the driving models of surrounding vehicles and predict their future driving intentions. Based on Monte Carlo tree search (MCTS), different action spaces are evaluated for longitudinal merging-gap selection and lateral interactive merging, while heuristic pruning reduces the computational cost. Moreover, coefficients related to driving style are introduced, and their influence on merging performance is analyzed. Finally, the proposed algorithm is implemented in a two-lane simulation environment; the results show that the proposal outperforms other baseline methods.
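At the heart of the MCTS evaluation described above is the selection step, which trades off exploiting high-value merging actions against exploring rarely visited ones. The sketch below shows the standard UCB1 selection rule often used in MCTS; it is a generic illustration, not the paper's algorithm, and the exploration constant `c` and the `(value_sum, visits)` child representation are assumptions.

```python
import math

def uct_select(children, total_visits, c=1.4):
    """UCB1 selection for MCTS.

    children:     list of (value_sum, visits) statistics per child action
                  (e.g. candidate merging gaps or lateral maneuvers)
    total_visits: visit count of the parent node
    Returns the index of the child maximizing mean value + exploration bonus.
    Unvisited children score infinity, so each is tried at least once.
    """
    def uct(value_sum, visits):
        if visits == 0:
            return float("inf")
        return value_sum / visits + c * math.sqrt(math.log(total_visits) / visits)
    return max(range(len(children)), key=lambda i: uct(*children[i]))
```

Heuristic pruning, as the abstract mentions, would drop clearly infeasible children (e.g. gaps that violate safety margins) before this selection step to shrink the search tree.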