2017
DOI: 10.48550/arxiv.1706.09829
Preprint

Towards Monocular Vision based Obstacle Avoidance through Deep Reinforcement Learning

Cited by 41 publications (55 citation statements)
References 14 publications
“…Generally, these states are represented as depth prediction images, and the magnitude of the reward determines the strength of the signal by which the UAV alters its behavior policy. For the problem of obstacle avoidance, [1] demonstrated that deep Q-learning, as introduced by [15], is effective for robotic navigation in two dimensions. Alternative approaches have also been explored; for example, Khan et al. [22] provide a model-based approach that estimates the probability that the UAV will collide within an unknown environment, allowing for specific actions depending on the certainty of the prediction.…”
Section: Reinforcement Learning for Obstacle Avoidance
confidence: 99%
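The deep Q-learning update referenced in the excerpt above can be sketched in a few lines. This is a minimal illustration under stated assumptions — the state size, action set, learning rate, and the single linear layer standing in for the deep network are all hypothetical, not the cited paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 16    # e.g. a flattened coarse depth-prediction image (assumed size)
N_ACTIONS = 3     # e.g. forward, turn-left, turn-right (assumed action set)
GAMMA = 0.99      # discount factor
LR = 1e-2         # learning rate

# A single linear layer stands in for the deep Q-network.
W = rng.normal(scale=0.1, size=(STATE_DIM, N_ACTIONS))

def q_values(state, weights):
    """Predicted Q-value for each discrete action in the given state."""
    return state @ weights

def td_update(state, action, reward, next_state, done, weights):
    """One gradient step on the squared TD error for a transition (s, a, r, s')."""
    target = reward if done else reward + GAMMA * q_values(next_state, weights).max()
    pred = q_values(state, weights)[action]
    # Gradient of 0.5 * (pred - target)^2 w.r.t. the weights of `action`.
    grad = (pred - target) * state
    weights = weights.copy()
    weights[:, action] -= LR * grad
    return weights

# One simulated transition: a collision ends the episode with reward -1.
s = rng.normal(size=STATE_DIM)
s_next = rng.normal(size=STATE_DIM)
W = td_update(s, action=0, reward=-1.0, next_state=s_next, done=True, weights=W)
```

The collision penalty here is the "size of the reward" the excerpt mentions: a larger negative reward on collision produces a larger TD error and hence a stronger correction to the avoidance policy.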
“…One of the major challenges with autonomous motion planning is ensuring that an agent can efficiently explore a space while avoiding collision with objects in complex and dynamic environments. To resolve this, recent research has turned to applying deep reinforcement learning techniques to robotics and UAVs [1], [2].…”
Section: Introduction
confidence: 99%
“…More recently, such learning has been applied to tasks in robotics as well, where it was initially used for tasks in stable and observable environments [3]. For mobile robotics, however, the complexity increases significantly given the interactions with obstacles in the physical workspace [4,5]. In this context, Deep-RL ends up simplifying the problem by discretizing it [6].…”
Section: Introduction
confidence: 99%