2021 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra48506.2021.9561188
Deep Reinforcement Learning for Mapless Navigation of a Hybrid Aerial Underwater Vehicle with Medium Transition

Citations: cited by 37 publications (31 citation statements)
References: 28 publications
“…PPO is a classic online RL algorithm that can be used to solve the sequential decision-making problem of unmanned aerial vehicles and non-linear attitude control problems [40,48]. SAC is a representative algorithm of offline RL that can realize low-level control of quadrotors and map-free navigation and obstacle avoidance of hybrid unmanned underwater vehicles [49,50]. GAIL is a representative IL algorithm that predicts airport-airside motion of aircraft-taxi trajectories and enables mobile robots to learn to navigate in dynamic pedestrian environments in a socially desirable manner [51,52].…”
Section: Results (mentioning)
Confidence: 99%
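As context for the citation statement above, the snippet below is a minimal, hypothetical sketch of how SAC is commonly set up for a mapless-navigation task using the stable-baselines3 library; the environment id HuauvNavEnv-v0 and all hyperparameters are illustrative assumptions, not the configuration used in the cited paper or in [49, 50].

```python
# Minimal sketch: SAC applied to a mapless-navigation task.
# Assumptions: a Gym-style environment "HuauvNavEnv-v0" (hypothetical) whose
# observation is a flat vector of range readings plus the relative goal, and
# whose action is a continuous velocity command. Hyperparameters are
# illustrative only, not those of the cited works.
import gymnasium as gym
from stable_baselines3 import SAC

env = gym.make("HuauvNavEnv-v0")  # hypothetical id, must be registered beforehand

model = SAC(
    "MlpPolicy",            # plain MLP actor-critic over the flat observation
    env,
    learning_rate=3e-4,
    buffer_size=1_000_000,  # off-policy replay buffer
    batch_size=256,
    verbose=1,
)
model.learn(total_timesteps=500_000)
model.save("sac_mapless_nav")
```

Note that SAC is usually described as off-policy rather than offline RL: it still collects its own experience during training, but learns from a replay buffer of past transitions.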
“…The network architecture used by the Deep-RL resembles the SAC network presented in the works of de Jesus et al [24], and Grando et al [25]. Fig.…”
Section: Methods (mentioning)
Confidence: 99%
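The excerpt above refers to the SAC network of [24, 25] without reproducing it, so the following is only a generic, assumed sketch of an actor for mapless navigation (range readings and relative goal in, bounded velocity commands out); the layer sizes and input/output dimensions are assumptions, not the architecture from those works.

```python
# Generic SAC-style actor sketch for mapless navigation (assumed architecture,
# not the one from [24, 25]): distance readings + relative goal in, bounded
# velocity commands out.
import torch
import torch.nn as nn

class NavActor(nn.Module):
    def __init__(self, n_ranges=24, goal_dim=2, action_dim=2, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(n_ranges + goal_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean = nn.Linear(hidden, action_dim)     # action mean
        self.log_std = nn.Linear(hidden, action_dim)  # state-dependent std (SAC-style)

    def forward(self, ranges, goal):
        h = self.trunk(torch.cat([ranges, goal], dim=-1))
        mean = self.mean(h)
        log_std = self.log_std(h).clamp(-20.0, 2.0)
        # Reparameterized sample squashed to [-1, 1], as in SAC
        return torch.tanh(mean + log_std.exp() * torch.randn_like(mean))

# Usage example (batch of one observation):
# actor = NavActor()
# action = actor(torch.zeros(1, 24), torch.zeros(1, 2))
```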
“…From the first works regarding HUAUVs [3,10,11,12] to recent studies [5,8,13], no major contributions were added to the high-level motion planning of quadrotor-based HUAUVs branch. In recent work, Grando et al [2] presented a navigation strategy based on reinforcement learning to HUAUVs. Nevertheless, their method is limited to pre-trained cluttered environments.…”
Section: Related Work (mentioning)
Confidence: 99%
“…Vehicles capable of acting both in the air and under the water offer a large number of applications in a variety of scenarios, some of them extreme, ranging from the extreme cold of polar regions to the hot and humid climate of rain forests. Operating and transiting between air and water in such challenging environments are difficult tasks, with many setbacks to achieve a good perception and actuation overall [2]. Most of these Hybrid Unmanned Aerial Underwater Vehicles (HUAUVs) were inspired by aerial vehicles, such as quadcopters and hexacopters [3].…”
Section: Introduction (mentioning)
Confidence: 99%