2023
DOI: 10.1007/978-3-031-28715-2_7

Deep Reinforcement Learning for Autonomous Mobile Robot Navigation

Cited by 4 publications (3 citation statements)
References 21 publications
“…Studies have examined the integration of AI in AV research. One such study [17] proposes deep learning algorithms for the control layer of AVs. This study also discusses DRL algorithms such as DQN for autonomous control. Specifically, actor-critic architectures, such as DDPG and its successor TD3 [18], optimize action selection with respect to Q-values, providing efficient strategies for continuous action spaces.…”
Section: Related Work
Mentioning confidence: 99%
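The actor-critic mechanism referenced in this statement can be made concrete with a short sketch. The following is a minimal DDPG/TD3-style actor update in PyTorch, assuming a deterministic actor and a single critic; network sizes, dimensions, and hyperparameters are illustrative and not taken from [17] or [18].

# Minimal sketch of a DDPG/TD3-style actor-critic update for continuous actions.
# Sizes and hyperparameters are illustrative assumptions, not values from the cited works.
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Maps a state to a deterministic continuous action in [-1, 1]."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh(),
        )

    def forward(self, state):
        return self.net(state)

class Critic(nn.Module):
    """Estimates Q(s, a) for a state-action pair."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

state_dim, action_dim = 24, 2   # e.g. laser ranges plus (linear, angular) velocity command
actor, critic = Actor(state_dim, action_dim), Critic(state_dim, action_dim)
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)

# Actor update: pick actions that maximize the critic's Q-value estimate,
# i.e. minimize -Q(s, pi(s)) over a batch of states (normally drawn from a replay buffer).
states = torch.randn(64, state_dim)   # placeholder batch for illustration
actor_loss = -critic(states, actor(states)).mean()
actor_opt.zero_grad()
actor_loss.backward()
actor_opt.step()

Optimizing the actor directly against the critic's Q-value is what lets these methods act in continuous action spaces, where the argmax over actions used by DQN is not tractable.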
“…To bypass the acquisition and storage of global knowledge of the environment, current planning methods aim to utilize limited global cues and combine them with local sensory information about the agent and its immediate surroundings ( Tai et al, 2017 ; Zhu et al, 2017 ; Tang et al, 2020 ; Ding et al, 2022 ). Integrating the core principles of such methods with the learning capabilities inherent in modern Deep Neural Networks (DNNs) and the recent advancements in Reinforcement Learning (RL) has paved the way for achieving optimal solutions ( de Jesús Plasencia-Salgueiro, 2023 ). However, achieving optimality with Deep Reinforcement Learning (DRL) solutions requires time, computing resources and power, which are not readily available in edge solutions.…”
Section: Introduction
Mentioning confidence: 99%
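As a rough illustration of pairing limited global cues with local sensing, the hypothetical function below (not from the cited works) builds a policy observation from the robot-relative goal distance and bearing plus a normalized local laser scan, with no global map of the environment involved.

# Illustrative sketch: combine a limited global cue (relative goal) with local sensing (LiDAR).
# Function name, dimensions, and normalization are assumptions made for this example.
import numpy as np

def build_observation(robot_pose, goal_xy, lidar_ranges, max_range=10.0):
    """robot_pose = (x, y, yaw); goal_xy = (gx, gy); lidar_ranges = raw range readings."""
    x, y, yaw = robot_pose
    dx, dy = goal_xy[0] - x, goal_xy[1] - y
    goal_dist = np.hypot(dx, dy)
    goal_heading = np.arctan2(dy, dx) - yaw                                 # goal bearing in the robot frame
    goal_heading = np.arctan2(np.sin(goal_heading), np.cos(goal_heading))   # wrap to [-pi, pi]
    scan = np.clip(np.asarray(lidar_ranges), 0.0, max_range) / max_range    # normalized local sensing
    # The policy never sees absolute coordinates or a map, only this compact vector.
    return np.concatenate(([goal_dist / max_range, goal_heading / np.pi], scan))

obs = build_observation((1.0, 2.0, 0.3), (5.0, 6.0), np.random.uniform(0.5, 10.0, size=24))
print(obs.shape)   # (26,): 2 goal cues + 24 normalized laser beams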
“…The second origin of resource requirements is the meticulously tailored reward objectives required for DRL, which make extensive training sessions and careful tuning imperative. Influential methods from DRL for policy learning ( Schulman et al, 2015 ; Schulman et al, 2017 ), Q learning ( Mnih et al, 2013 ), or their combination ( Mnih et al, 2016 ; Haarnoja et al, 2018 ) have demonstrated remarkable results in navigation tasks ( de Jesús Plasencia-Salgueiro, 2023 ). However, such methods require the precise definition of reward objectives adapted to the given task, and result in the need for extensive training sessions and significant tuning.…”
Section: Introduction
Mentioning confidence: 99%
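To illustrate what "meticulously tailored reward objectives" means in practice, the sketch below shows a hypothetical hand-crafted navigation reward; every term and coefficient is an assumption for illustration and would typically need per-robot, per-environment tuning, which is exactly the burden the passage describes.

# Hypothetical hand-tuned navigation reward (not a reward from the cited papers).
# Each weight below is a tuning knob that affects learned behavior.
def navigation_reward(prev_goal_dist, goal_dist, min_obstacle_dist,
                      reached_goal, collided,
                      w_progress=2.5, w_clearance=0.1,
                      r_goal=100.0, r_collision=-100.0, step_penalty=-0.05):
    if reached_goal:
        return r_goal
    if collided:
        return r_collision
    progress = prev_goal_dist - goal_dist                    # reward moving toward the goal
    clearance_penalty = max(0.0, 0.5 - min_obstacle_dist)    # penalize coming within 0.5 m of obstacles
    return w_progress * progress - w_clearance * clearance_penalty + step_penalty

print(navigation_reward(prev_goal_dist=4.2, goal_dist=4.0,
                        min_obstacle_dist=0.8, reached_goal=False, collided=False))

Small changes to these weights can shift the learned policy from overly cautious to collision-prone, which is why such objectives demand extensive training and tuning.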