2019
DOI: 10.1007/s10846-019-01073-3

Towards Real-Time Path Planning through Deep Reinforcement Learning for a UAV in Dynamic Environments

Cited by 213 publications (92 citation statements) | References 9 publications
“…The replay buffer size is 10^6. The above hyperparameter settings are referenced from [42]. In addition, the exploration rate ε was linearly decreased from 0.7 to 0.1 over a period of 1 million steps and then held fixed.…”
Section: Network Architecture and Hyperparameter Setting
Confidence: 99%
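The linear exploration-rate schedule quoted above can be sketched as follows; the function and parameter names are illustrative, not taken from the cited paper:

```python
def epsilon(step, eps_start=0.7, eps_end=0.1, decay_steps=1_000_000):
    """Linearly anneal the exploration rate from eps_start to eps_end
    over decay_steps environment steps, then hold it fixed."""
    frac = min(step / decay_steps, 1.0)  # fraction of the decay completed
    return eps_start + frac * (eps_end - eps_start)
```

With these values, ε starts at 0.7, reaches 0.1 at step 1,000,000, and stays at 0.1 for all later steps.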
“…As the pivotal foundation of social robot navigation, a wide variety of path planning methods have been proposed for robots navigating in different environments [7][8][9][10]. In general, these methods can be divided into global and local methods according to the completeness of the map information known before the path planning process.…”
Section: Related Work
Confidence: 99%
“…More critical situations, such as scenarios with disasters and dynamic threats, usually demand improved intelligent algorithms [18][19][20]. More recently, the development of deep learning has also spawned deep reinforcement learning based path planning [24].…”
Section: Introduction
Confidence: 99%