2022
DOI: 10.3390/aerospace9020101
Fast Path Planning for Long-Range Planetary Roving Based on a Hierarchical Framework and Deep Reinforcement Learning

Abstract: The global path planning of planetary surface rovers is crucial for optimizing exploration benefits and system safety. For the cases of long-range roving or obstacle constraints that are time-varied, there is an urgent need to improve the computational efficiency of path planning. This paper proposes a learning-based global path planning method that outperforms conventional searching and sampling-based methods in terms of planning speed. First, a distinguishable feature map is constructed through a traversabil…

Cited by 12 publications (9 citation statements) | References 21 publications
“…They have analyzed how the SVIN formulation can improve training gradients for problems with deterministic state transition dynamics and have seen that improvement empirically on the grid world dataset. Hu et al [110] proposed a learning-based global path planning method that outperforms conventional searching and sampling-based methods in terms of planning speed. They designed a hierarchical framework consisting of step iterations and block iterations.…”
Section: Application of Reinforcement Learning Algorithms
Mentioning confidence: 99%
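The step/block iteration idea summarized in the statement above can be illustrated with a toy two-level grid planner: a "block iteration" routes over a coarse map of blocks, and a "step iteration" refines that route at full resolution. This is only a sketch of the general coarse-to-fine pattern, not the paper's method (which uses a learned DRL agent rather than BFS); the block size, the BFS sub-planner, and the block-centre waypoints are all assumptions made for illustration.

```python
from collections import deque

def bfs(grid, start, goal):
    """Shortest path on a 4-connected binary grid (0 = free, 1 = blocked)."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in prev):
                prev[nxt] = cur
                queue.append(nxt)
    return None

def hierarchical_plan(occ, start, goal, block=4):
    """Block iteration: route over a coarse map whose cells are block x block
    patches (a patch counts as blocked if any cell in it is blocked).
    Step iteration: refine the route waypoint by waypoint at full resolution."""
    h, w = len(occ), len(occ[0])
    coarse = [[max(occ[i * block + r][j * block + c]
                   for r in range(block) for c in range(block))
               for j in range(w // block)]
              for i in range(h // block)]
    blocks = bfs(coarse, (start[0] // block, start[1] // block),
                 (goal[0] // block, goal[1] // block))
    if blocks is None:
        return None
    # visit the centre of each coarse block in turn, then the exact goal
    waypoints = [(b[0] * block + block // 2, b[1] * block + block // 2)
                 for b in blocks[1:]] + [goal]
    path, cur = [start], start
    for wp in waypoints:
        seg = bfs(occ, cur, wp)
        if seg is None:
            return None
        path += seg[1:]
        cur = wp
    return path
```

The payoff of the decomposition is that the coarse search touches only `(h // block) * (w // block)` cells, so the expensive full-resolution search is confined to short corridors between consecutive waypoints.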
“…To address these challenges, heuristics have been integrated into DRL frameworks to enhance search speed and circumvent local optima. Hu et al proposed SP-ResNet [18], a methodology aimed at accelerating planning speeds relative to conventional search and sampling-based techniques. Within this framework, double branches of residual networks are employed to abstract global and local obstacles, therefore constraining the search space for the DRL agent and serving as a heuristic.…”
Section: Introduction
Mentioning confidence: 99%
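The statement above describes using global and local obstacle abstractions as a heuristic that constrains the search space of a DRL agent. A hand-crafted analogue of that idea (not the paper's SP-ResNet, which learns those abstractions with double residual-network branches) is an action mask that rejects locally blocked moves and ranks the rest by a coarse global distance-to-goal estimate; all names and details below are illustrative assumptions.

```python
def candidate_actions(occ, pos, goal):
    """Hypothetical heuristic mask for a grid agent: the 'local branch'
    removes actions that move into an obstacle or off the map, and the
    'global branch' ranks the survivors by Manhattan distance to goal."""
    moves = {"down": (1, 0), "up": (-1, 0), "right": (0, 1), "left": (0, -1)}
    rows, cols = len(occ), len(occ[0])
    free = {}
    for name, (dr, dc) in moves.items():
        r, c = pos[0] + dr, pos[1] + dc
        if 0 <= r < rows and 0 <= c < cols and occ[r][c] == 0:
            free[name] = abs(r - goal[0]) + abs(c - goal[1])  # global estimate
    # actions the agent is allowed to consider, best-ranked first
    return sorted(free, key=free.get)
```

Restricting the agent to such a shortlist shrinks the effective action space per step, which is the sense in which the abstraction "serves as a heuristic" in the quoted description.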
“…However, when addressing additional operational conditions, the concept of accelerated exploration does not necessarily equate to effective planning. A number of studies have been conducted on global path planning, employing different algorithms to address various environmental considerations: obstacle avoidance [3] (MDP); terramechanics [4,5] (Dijkstra), [6] (Reinforcement learning); sun-synchronous motion [7] (A*), [8] (Multi-speed spatiotemporal A*); terramechanics and power generation [9] (A*), [10] (Reinforcement learning); thermal condition, power generation, and terramechanics [11] (Dijkstra); uncertainty of the information [12] (RRT*); and hazard risk and collision avoidance [13] (A*), [14] (MDP), [15] (A*). These studies emphasize the importance of carefully selecting mathematical models and algorithms based on the specific purpose and constraints to be taken into account in the path planning process.…”
Section: Introduction
Mentioning confidence: 99%
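Several of the baselines catalogued in the statement above (A* in [7,9,13,15], Dijkstra in [4,5,11]) share the same best-first grid-search skeleton. For reference, a minimal A* over a 4-connected occupancy grid is sketched below; the Manhattan heuristic and unit step cost are illustrative choices, not details from any of the cited studies.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = obstacle).
    Returns the path as a list of (row, col) cells, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    open_heap = [(h(start), start)]
    came_from = {start: None}
    g_cost = {start: 0}
    while open_heap:
        _, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0):
                ng = g_cost[cur] + 1  # unit step cost
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_heap, (ng + h(nxt), nxt))
    return None
```

Setting the heuristic to zero recovers Dijkstra's algorithm, which is why the two appear interchangeably across the studies listed above depending on whether a goal-directed estimate is available.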
“…As a result, small rovers will cause immediate change in temperature and battery status in accordance with local lunar surface temperature as well as the sun position, which constantly changes over the course of the mission period. Therefore, it is essential to control when to move (timings of relocation), as well as where to move (path), to circumvent the variation in thermal and luminous conditions the rover will encounter [7][8][9][10][11].…”
Section: Introduction
Mentioning confidence: 99%