2021
DOI: 10.1007/978-3-030-89177-0_10

Reinforcement Learning-Based Mapless Navigation with Fail-Safe Localisation

Cited by 6 publications (3 citation statements)
References 13 publications
“…In [99], a system based on DQN and reinforcement learning is designed, for which a two-dimensional simulated environment is used, unlike the present case study, which uses a three-dimensional environment. Both works use relative distances and angles for the location of the robot, and in the same way, the objective of the two works is that the robot reaches the goal.…”
Section: Discussion
confidence: 99%
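The citation statement above notes that both works use relative distances and angles to the goal as the robot's observation. As a minimal sketch of that common state representation for DQN-style mapless navigation (the function name and exact form are illustrative assumptions, not taken from either cited work):

```python
import math

def goal_observation(robot_x, robot_y, robot_yaw, goal_x, goal_y):
    """Relative distance and bearing to the goal -- a typical
    observation for DQN-based mapless navigation.
    Names and layout here are illustrative, not from the cited works."""
    dx, dy = goal_x - robot_x, goal_y - robot_y
    distance = math.hypot(dx, dy)
    # Bearing of the goal relative to the robot's heading,
    # wrapped into (-pi, pi].
    bearing = math.atan2(dy, dx) - robot_yaw
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))
    return distance, bearing
```

In both the 2D and 3D settings this pair is goal-frame-invariant, which is what lets the learned policy generalise to unseen goal positions.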
“…Gazebo has extensive sensor, robot and actuator libraries, ranging from laser range finders (Niu et al. [55]) to 2D/3D cameras, Kinect-style sensors (Microsoft [56]), contact sensors and force-torque sensors. Many robots are provided, including PR2 (Manny Ojigbo [57]), Pioneer2 DX (Cyberbotics Ltd. [58]), iRobot Create (iRobot Corp. [2]), the Universal robot arm (Liu et al. [59]), the Kuka robot arm (Niu et al. 2021b [60]) and TurtleBot (Open Source Robotics Foundation, Inc. [61]) (Lin et al. [62]). Compared with mobile robots and robotic arms, unmanned marine vehicles are more challenging to simulate, since the dynamics of wind, waves and sea currents must also be modelled to design energy-efficient control algorithms (Niu et al. [63]) (Niu et al. [64]) (Niu et al. [65]) (Niu et al. [66]), rather than only path-length-optimised algorithms (Niu et al. [67]) (Lu et al. [68]).…”
Section: Gazebo Classic
confidence: 99%
“…Although the works introduced above have obtained relatively promising results, to the authors' best knowledge none of these learning-based algorithms has considered the effect of localisation performance on the final navigation results, as illustrated in Section I, except the authors' preliminary work [20]. They all assume that ground-truth robot poses are available during navigation, and so produce policies akin to shortest-path strategies that ignore localisation quality along the navigation trajectory.…”
Section: Related Work
confidence: 99%
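The related-work statement above criticises policies that optimise only path length while ignoring localisation quality. One way such a localisation-aware objective is commonly expressed is a reward that trades off goal progress against pose uncertainty (e.g. the trace of the localisation covariance). The weights, terminal values and exact terms below are assumptions for illustration, not the paper's actual reward design:

```python
def navigation_reward(prev_dist, curr_dist, pose_cov_trace,
                      reached_goal, collided,
                      w_progress=1.0, w_loc=0.5):
    """Illustrative per-step reward mixing progress toward the goal
    with a penalty on localisation uncertainty.
    All constants here are hypothetical, not from the cited paper."""
    if reached_goal:
        return 10.0   # terminal bonus
    if collided:
        return -10.0  # terminal penalty
    progress = prev_dist - curr_dist  # positive when moving toward the goal
    # Penalising pose_cov_trace steers the policy away from regions
    # where the localiser degrades, even if the path gets longer.
    return w_progress * progress - w_loc * pose_cov_trace
```

With `w_loc = 0`, this reduces to the shortest-path-style shaping the quoted passage criticises; a positive `w_loc` makes well-localised detours competitive with the geometrically shortest route.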