2021
DOI: 10.1109/tits.2019.2954952

Memory-Based Deep Reinforcement Learning for Obstacle Avoidance in UAV With Limited Environment Knowledge

Abstract: This paper presents our method for enabling a UAV quadrotor, equipped with a monocular camera, to autonomously avoid collisions with obstacles in unstructured and unknown indoor environments. Compared to obstacle avoidance in ground vehicular robots, UAV navigation brings additional challenges because the UAV's motion is no longer constrained to a well-defined indoor ground or street environment. Horizontal structures in indoor and outdoor environments like decorative items, furnishings, ceiling fans, sign…

Cited by 185 publications (116 citation statements)
References: 31 publications
“…In 2013, DeepMind innovatively combined deep learning (DL) with RL to form a new hotspot in the field of artificial intelligence, known as DRL [20]. By leveraging the decision-making capabilities of RL and the perception capabilities of DL, DRL has been proven to be efficient at controlling UAVs [21][22][23][24][25][26][27][28][29][30][31]. Zhu [21] proposed a framework for target-driven visual navigation; this framework addressed some of the limitations that prevent DRL algorithms from being applied to realistic settings.…”
Section: Related Work
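For context on the DQN family that these citing works build on, here is a minimal sketch of the core Q-learning update behind DQN; the network shape, dimensions, and hyperparameters are illustrative assumptions, not details from the cited papers:

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        # Small fully connected Q-network: one Q-value per discrete action.
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, obs):
        return self.net(obs)

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    obs, action, reward, next_obs, done = batch
    # Q(s, a) for the actions that were actually taken.
    q_sa = q_net(obs).gather(1, action.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bootstrapped TD target: r + gamma * max_a' Q_target(s', a').
        target = reward + gamma * (1.0 - done) * target_net(next_obs).max(1).values
    return nn.functional.mse_loss(q_sa, target)
```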
“…Kersandt [27] used DQN, Double DQN, and Dueling DQN [33] on the same UAV control mission and compared each of these methods. Singla [28] designed a deep recurrent Q-network [34] with temporal attention that exhibited significant improvements over DQN and D3QN [32] for UAV motion planning in a cluttered and unseen environment. For the autonomous UAV landing task, Polvara [29] introduced a sequential DQN which is comparable with DQN and human pilots while being quantitatively better in noisy conditions.…”
Section: Related Work
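The memory-based approach credited to Singla [28] pairs a recurrent Q-network with temporal attention over past frames. A minimal sketch of that idea follows; the LSTM size and the additive attention form are assumptions for illustration, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class RecurrentAttentionQNet(nn.Module):
    def __init__(self, feat_dim: int, hidden: int, n_actions: int):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)        # one attention score per time step
        self.q_head = nn.Linear(hidden, n_actions)

    def forward(self, feats):                    # feats: (batch, time, feat_dim)
        h, _ = self.lstm(feats)                  # hidden state at every time step
        w = torch.softmax(self.score(h), dim=1)  # weight frames across the sequence
        ctx = (w * h).sum(dim=1)                 # attention-weighted temporal summary
        return self.q_head(ctx)                  # Q-values from the summarized memory
```

The recurrence plus attention is what lets the agent exploit memory of recently seen obstacles that have left the camera's field of view.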
“…These DL approaches usually include a module for situational awareness that generates a set of feature maps related to the state of the robotic system and its surroundings; these computed feature maps then feed a second module for the decision-making process. Therefore, the combination of these two modules makes up a complex network that takes raw sensor data as input and generates the motion control commands for the robotic system [48,[51][52][53][54][55].…”
Section: Deep Learning in the Context of Autonomous Collision Avoidance
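A minimal sketch of the two-module pattern this statement describes, assuming an 84x84 RGB camera frame and five discrete motion commands (all shapes and layer choices are illustrative assumptions):

```python
import torch
import torch.nn as nn

class PerceptionModule(nn.Module):
    def __init__(self):
        super().__init__()
        # Situational-awareness module: raw image in, flattened feature maps out.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )

    def forward(self, img):
        return self.cnn(img)

class DecisionModule(nn.Module):
    def __init__(self, feat_dim: int, n_commands: int):
        super().__init__()
        # Decision-making module: feature maps in, motion-command scores out.
        self.head = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                  nn.Linear(256, n_commands))

    def forward(self, feats):
        return self.head(feats)

# End-to-end: raw sensor data -> features -> motion control command.
perception = PerceptionModule()
decision = DecisionModule(feat_dim=64 * 9 * 9, n_commands=5)
img = torch.zeros(1, 3, 84, 84)                # dummy camera frame
command = decision(perception(img)).argmax(1)  # pick the best command
```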
“…Thus, given a specific state and based on previous experience, the agent can infer which action maximizes a predefined goal. Several approaches use RL methods to learn effective collision avoidance policies, which requires experience of successful trajectories as well as of undesirable events like collisions [51,53,54,[57][58][59]. The use of simulated environments allows a large amount of such data to be collected easily.…”
Section: Deep Learning in the Context of Autonomous Collision Avoidance
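A sketch of how such collision-inclusive experience might be gathered in simulation; the environment interface and the reward shaping below are hypothetical assumptions, not taken from the cited works:

```python
def reward_fn(progress: float, collided: bool) -> float:
    # Reward progress toward the goal; penalize collisions heavily so the
    # policy also learns from undesirable events, not only successful runs.
    return -10.0 if collided else progress

def collect_episode(env, policy, buffer, max_steps=500):
    # `env` is a hypothetical simulator exposing reset()/step(); cheap
    # simulated rollouts make it safe to experience many collisions.
    obs = env.reset()
    for _ in range(max_steps):
        action = policy(obs)
        next_obs, progress, collided, done = env.step(action)
        buffer.append((obs, action, reward_fn(progress, collided), next_obs, done))
        obs = next_obs
        if done:
            break
```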