2020
DOI: 10.1007/s11370-020-00313-y
Reinforcement learning path planning algorithm based on obstacle area expansion strategy

Cited by 22 publications
(11 citation statements)
References 15 publications
“…There have been some researches processing the concave regions before the path planning. Chen et al 28 proposed an improved OAE-Q(λ)-learning path planning method by introducing the concave area expansion strategy to avoid repeated invalid actions when the agent falls into the obstacle area. This method fills the concave area in the map before the path planning.…”
Section: Solution to the Concave Region
confidence: 99%
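The concave-area filling described in the statement above can be illustrated with a small sketch. The grid encoding, the fill_concave_areas helper, and the dead-end rule (a free cell blocked on three or more sides is filled) are assumptions chosen for illustration, not the exact procedure of Chen et al.

```python
# Hypothetical illustration of filling concave (dead-end) obstacle regions in a
# 2D occupancy grid before path planning; not the authors' exact algorithm.
import numpy as np

def fill_concave_areas(grid):
    """Iteratively mark free cells enclosed by obstacles (or the map border)
    on three or more sides as obstacles, so the agent cannot wander into
    dead-end pockets during learning.

    grid: 2D numpy array, 1 = obstacle, 0 = free. Returns a filled copy.
    """
    g = grid.copy()
    rows, cols = g.shape
    changed = True
    while changed:                      # repeat until no cell changes
        changed = False
        for r in range(rows):
            for c in range(cols):
                if g[r, c] == 1:
                    continue
                blocked = 0
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    nr, nc = r + dr, c + dc
                    # treat the map border like an obstacle
                    if not (0 <= nr < rows and 0 <= nc < cols) or g[nr, nc] == 1:
                        blocked += 1
                if blocked >= 3:        # dead-end cell: fill it
                    g[r, c] = 1
                    changed = True
    return g

if __name__ == "__main__":
    grid = np.array([
        [0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 0, 1, 0],   # the pocket inside the U-shaped obstacle gets filled
        [0, 1, 0, 1, 0],
        [0, 0, 0, 0, 0],
    ])
    print(fill_concave_areas(grid))
```

Running the example fills the two free cells inside the U-shaped obstacle, so a planner sees one convex blocked region instead of a trap.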
“…With the rapid development of machine learning [7][8][9][10], using reinforcement learning-based methods to solve path planning problems has attracted more and more attention. Reinforcement learning can realize online learning without a tutor, so it fully meets the needs of mobile robot path planning.…”
Section: Introduction
confidence: 99%
“…Based on the traditional Q‐learning algorithm, Wang et al 9 combined greedy search and Boltzmann search to balance the randomness and purpose of the search, reducing the possibility of falling into local optimal. Chen et al 10 proposed OAE‐Q( λ )‐learning and applied it to the path planning in the complex environment, improving the traditional Q( λ )‐learning algorithm by adding the obstacle area expansion strategy.…”
Section: Introduction
confidence: 99%
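The Boltzmann search mentioned in the statement above can be sketched as softmax action selection over tabular Q-values. The function name, temperature values, and annealing scheme below are illustrative assumptions and are not taken from Wang et al.

```python
# Minimal sketch of Boltzmann (softmax) action selection for tabular Q-learning;
# high temperature gives near-random exploration, low temperature near-greedy
# exploitation, which is how randomness and purpose are balanced.
import numpy as np

def boltzmann_action(q_values, temperature):
    """Sample an action with probability proportional to exp(Q / T)."""
    prefs = np.asarray(q_values, dtype=float) / max(temperature, 1e-8)
    prefs -= prefs.max()                 # shift for numerical stability
    probs = np.exp(prefs)
    probs /= probs.sum()
    return np.random.choice(len(q_values), p=probs)

if __name__ == "__main__":
    q = [0.1, 0.5, 0.2, 0.0]             # Q-values for 4 actions in one state
    for T in (5.0, 0.1):                  # anneal from exploration to exploitation
        counts = np.bincount(
            [boltzmann_action(q, T) for _ in range(1000)], minlength=4)
        print(f"T={T}: action frequencies {counts}")
```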