2021 International Conference on Artificial Intelligence (ICAI)
DOI: 10.1109/icai52203.2021.9445200
Motion Planning for a Snake Robot using Double Deep Q-Learning

Cited by 12 publications (3 citation statements) · References 12 publications
“…To improve the environmental adaptability of snake robots, a DRL-based framework with a double deep Q-learning technique is proposed in ref. [25] to learn the optimal policy for reaching goal points in unknown environments. However, the main variables across these unknown environments are friction and stiffness, not obstacles.…”
Section: Introduction
confidence: 99%
“…A simple sliding mode (SM) control theory was proposed for a differential drive‐enabled wheeled mobile robot (Dagci et al., 2013). To reach the goal point from a random start point, a double deep Q-learning-based technique for learning the optimal policy was proposed (Khan et al., 2021). To ensure the gait performs well in discrete action spaces and continuous state spaces (Shi et al., 2020), the Deep Q-Network algorithm was employed to obtain a novel, efficient gait.…”
Section: Introduction
confidence: 99%
“…Considering the model-free nature of the reinforcement learning algorithm [28], combined with offline training of the neural network, it can be used directly for the lateral control of autonomous cars and compensates well for the nonlinear characteristics of vehicles. Recent work that uses a Double Deep Q-learning Network (DDQN) for path tracking shows promising results [29]. Furthermore, some researchers have studied the possibility of combining MPC and deep reinforcement learning.…”
Section: Introduction
confidence: 99%
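The double deep Q-learning technique that these citing works build on decouples action *selection* (online network) from action *evaluation* (target network) to reduce the overestimation bias of plain Q-learning. A minimal sketch of the target computation, assuming batched Q-value arrays produced by hypothetical online and target networks (the function name and array shapes are illustrative, not from the cited paper):

```python
import numpy as np

def double_dqn_targets(q_online_next, q_target_next, rewards, dones, gamma=0.99):
    """Compute Double DQN bootstrap targets for a batch of transitions.

    q_online_next : (batch, n_actions) Q-values of next states from the online net
    q_target_next : (batch, n_actions) Q-values of next states from the target net
    rewards, dones: (batch,) reward and terminal flag (1.0 if episode ended)
    """
    # Online network selects the greedy action...
    best_actions = np.argmax(q_online_next, axis=1)
    # ...but the target network evaluates it.
    next_values = q_target_next[np.arange(len(q_target_next)), best_actions]
    # Terminal transitions bootstrap to zero.
    return rewards + gamma * (1.0 - dones) * next_values
```

In a full agent these targets would serve as regression labels for the online network's Q-values of the taken actions, with the target network periodically synchronized to the online weights.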