2021
DOI: 10.48550/arxiv.2103.06403
Preprint

A Vision Based Deep Reinforcement Learning Algorithm for UAV Obstacle Avoidance

Abstract: Integration of reinforcement learning with unmanned aerial vehicles (UAVs) to achieve autonomous flight has been an active research area in recent years. An important part focuses on obstacle detection and avoidance for UAVs navigating through an environment. Exploration in an unseen environment can be tackled with Deep Q-Network (DQN). However, value exploration with uniform sampling of actions may lead to redundant states, where often the environments inherently bear sparse rewards. To resolve this, we prese…
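The abstract's point about uniform action sampling in DQN exploration can be illustrated with a minimal sketch. This is not the paper's code; the function name and the Q-value list are hypothetical, showing only the standard epsilon-greedy rule, in which exploration draws actions uniformly at random regardless of their estimated value:

```python
import random

def epsilon_greedy(q_values, epsilon, rng=None):
    """Standard epsilon-greedy action selection (hypothetical helper,
    not from the paper). With probability epsilon, sample an action
    uniformly at random -- the 'uniform sampling' the abstract refers
    to, which can revisit redundant states under sparse rewards.
    Otherwise exploit the current Q-estimates via argmax."""
    rng = rng or random.Random()
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))           # uniform exploration
    return max(range(len(q_values)), key=lambda a: q_values[a])  # greedy
```

With epsilon = 0 this always returns the greedy action; with epsilon = 1 it ignores the Q-values entirely, which is why sparse-reward environments motivate exploration schemes that bias action selection rather than sampling uniformly.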

Cited by 3 publications
(5 citation statements)
References 15 publications
“…The agent increased its rewards and steps just by staying in one place and rotating indefinitely. Also, while comparing our work with [18], we noted that the training times using the Guidance Training algorithm were considerably larger, taking days to train for 1000 episodes. In contrast, the SAM model could be trained for 8k episodes in a few hours.…”
Section: Training Results
confidence: 95%
“…However, attention-based mechanisms, including transformers, have been used to replace LSTMs and GRUs due to their inherent limitations [27]. Moreover, [18] developed an autonomous multi-rotor flight DRL algorithm that enabled collision avoidance in novel scenarios while encouraging exploration. The algorithm used depth images as input while flying across an enclosed space, with the objective of maximizing the number of steps taken before a crash.…”
Section: Collision Avoidance Using Reinforcement Learning
confidence: 99%
“…Although the algorithm has good performance, constructing numerous real-world-like environments (Sadeghi & Levine, 2016) requires significant effort and time. He et al (2020) and Roghair et al (2021) use realistic virtual simulators to train their obstacle avoidance algorithms, but these are not evaluated in real environments.…”
Section: Journal Pre-proof
confidence: 99%
“…Especially in the computer vision field, the level of situation awareness based on visual information has been dramatically improved. For example, AlexNet (Krizhevsky et al, 2012) has significantly improved object classification performance by intro- […] environments (e.g., depth images) during the training (Loquercio et al, 2021; Ramezani Dooraki & Lee, 2018; Wu et al, 2018), or use virtual simulators that are similar to real environments (He et al, 2020; Roghair et al, 2021; Sadeghi & Levine, 2016). Ahn & Song (2020) trained a robot arm grasping policy in simulation using a vision sensor and deployed the learned policy in the physical world with additional training in real-world environments.…”
Section: Introduction
confidence: 99%