2017
DOI: 10.1016/j.artint.2014.11.009

Automated aerial suspended cargo delivery through reinforcement learning

Abstract: Cargo-bearing unmanned aerial vehicles (UAVs) have tremendous potential to assist humans by delivering food, medicine, and other supplies. For time-critical cargo delivery tasks, UAVs need to be able to quickly navigate their environments and deliver suspended payloads with bounded load displacement. The task is challenging because it requires balancing constraints on the joint UAV-suspended load system dynamics. This article presents a reinforcement learning approach for aerial cargo delivery tasks in environments with…

Cited by 139 publications (82 citation statements)
References 24 publications
“…Since manually accounting for all possibilities is often infeasible, sampling-based, learning-based, and other intelligent methods are the norm [18]. Reinforcement learning (RL), in particular, has been successful for robotic task learning [15] in several problems such as table tennis [21], swing-free UAV delivery [7], and a self-driving car [12]. However, traditional RL methods do not handle continuous and high-dimensional state spaces well [11].…”
Section: Introduction
confidence: 99%
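The limitation this quote points to is easy to see in code. Below is a minimal sketch, not taken from any of the cited papers, of tabular Q-learning over a discretized continuous state: the table grows as bins**dim, which is exactly why traditional tabular methods break down in continuous, high-dimensional spaces. All names and parameters (bins, dim, n_actions) are illustrative assumptions.

```python
# Minimal sketch: why tabular RL scales poorly on continuous states.
# A continuous state must be discretized before a Q-table applies; the
# table size grows as bins**dim (the curse of dimensionality).
import numpy as np

bins, dim, n_actions = 10, 6, 4          # 10 bins per axis, 6-D state
q_table = np.zeros((bins,) * dim + (n_actions,))
print(q_table.size)                      # 4,000,000 entries already

def discretize(state, low=-1.0, high=1.0):
    """Map a continuous state in [low, high]^dim onto grid indices."""
    idx = ((state - low) / (high - low) * bins).astype(int)
    return tuple(np.clip(idx, 0, bins - 1))

def q_update(state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One tabular Q-learning backup on the discretized states."""
    s, s2 = discretize(state), discretize(next_state)
    target = reward + gamma * q_table[s2].max()
    q_table[s + (action,)] += alpha * (target - q_table[s + (action,)])

q_update(np.random.uniform(-1, 1, dim), 0, 1.0, np.random.uniform(-1, 1, dim))
```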
“…Because the method learns and performs the task in the feature space, such transfer is possible. Previously, we empirically showed that, using hand-crafted features, batch RL learns in small spaces and acts on larger problems [8], [7], but did not address when learning transfer is possible and how to do it for an arbitrary problem. This paper formalizes the feature selection and the conditions under which transfer is possible so that it can be applied for classes of PBTs.…”
Section: Introduction
confidence: 99%
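The transfer idea in this quote, learning in a small space and acting on a larger one, hinges on the value function being defined over features rather than raw states. Below is a hedged sketch of that mechanism, assuming simple distance-based features and a least-squares fit; it illustrates the principle only and is not the authors' batch RL algorithm from [7], [8].

```python
# Sketch: a value function learned as a linear combination of hand-crafted
# features generalizes beyond the small training region, because the policy
# only ever queries the features. Feature choices are illustrative.
import numpy as np

def features(state, goal):
    """Hand-crafted features: bias, distance to goal, squared distance."""
    d = np.linalg.norm(state - goal)
    return np.array([1.0, d, d * d])

rng = np.random.default_rng(0)
goal = np.zeros(2)

# "Learn in a small space": fit weights from samples near the goal only.
X = rng.uniform(-1, 1, size=(500, 2))
Phi = np.stack([features(x, goal) for x in X])
V_target = -np.linalg.norm(X - goal, axis=1)   # stand-in for bootstrapped values
w, *_ = np.linalg.lstsq(Phi, V_target, rcond=None)

# "Act on larger problems": greedy action choice far outside the training
# box still works, since it only evaluates the feature-based value estimate.
def greedy_action(state, actions, dt=0.1):
    return max(actions, key=lambda a: features(state + dt * a, goal) @ w)

actions = [np.array(a) for a in ((1, 0), (-1, 0), (0, 1), (0, -1))]
print(greedy_action(np.array([50.0, -30.0]), actions))
```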
“…Papers [13] and [16] also deal with learning, but put a strong emphasis on managing the complexity of doing so, thus addressing RC3. [13] introduces a scalable methodology to learn and transfer knowledge of the transition (and reward) models for model-based reinforcement learning in a complex world.…”
Section: Papers Focusing on CA1: Robots That Know
confidence: 99%
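As a rough illustration of what "learning the transition (and reward) models" can look like in the simplest case, the sketch below fits a linear model s' ≈ As + Ba from a batch of logged transitions and reuses it for one-step predictions. The linear form, the data, and the variable names are all assumptions for illustration, not the scalable methodology of [13].

```python
# Sketch: fit a linear transition model from logged transitions, then
# reuse the fitted model to simulate steps for model-based RL.
import numpy as np

rng = np.random.default_rng(1)
s_dim, a_dim, n = 4, 2, 1000

# Logged transitions from unknown linear-ish dynamics (stand-in data).
A_true = np.eye(s_dim) + 0.01 * rng.standard_normal((s_dim, s_dim))
B_true = 0.1 * rng.standard_normal((s_dim, a_dim))
S = rng.standard_normal((n, s_dim))
U = rng.standard_normal((n, a_dim))
S_next = S @ A_true.T + U @ B_true.T + 0.01 * rng.standard_normal((n, s_dim))

# Least-squares fit of [A | B] from the batch of transitions.
X = np.hstack([S, U])
W, *_ = np.linalg.lstsq(X, S_next, rcond=None)   # W is (s_dim+a_dim, s_dim)
A_hat, B_hat = W[:s_dim].T, W[s_dim:].T

def predict(s, a):
    """Simulate one step with the learned model."""
    return A_hat @ s + B_hat @ a

print(np.allclose(predict(S[0], U[0]), S_next[0], atol=0.1))
```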
“…The authors use a formulation of Markov decision processes that support efficient online-learning of relevant problem features in order to approximate world dynamics. [16] presents a reinforcement learning approach for aerial cargo delivery tasks in environments with static obstacles. The authors plan and create swing-free trajectories with bounded load displacements.…”
Section: Papers Focusing on CA1: Robots That Know
confidence: 99%
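A common way to encode the bounded-load-displacement requirement in an RL formulation is through the reward. The sketch below is an assumed, illustrative reward, not the paper's actual one: it penalizes both distance to the goal and the load's angular displacement from vertical, with the swing term weighted more heavily.

```python
# Illustrative reward (an assumption, not the paper's exact formulation)
# for swing-free cargo delivery: trade off progress to the goal against
# the suspended load's displacement from vertical.
import numpy as np

def swing_free_reward(uav_pos, goal_pos, load_angles,
                      w_dist=1.0, w_swing=5.0):
    """Negative cost: distance-to-goal plus a penalty on load displacement.

    load_angles: (phi, theta) of the suspended load relative to vertical.
    The heavier w_swing weight expresses the bounded-displacement requirement.
    """
    dist = np.linalg.norm(uav_pos - goal_pos)
    swing = np.linalg.norm(load_angles)
    return -(w_dist * dist + w_swing * swing)

print(swing_free_reward(np.array([1.0, 2.0, 1.5]),
                        np.zeros(3),
                        np.array([0.05, -0.02])))
```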
“…Reinforcement learning has already been applied to quadrotors with a slung load [3], [12]-[14]. All these studies use a model of the system dynamics. In this paper, a two-dimensional version of this slung load problem is used with a model-free reinforcement learning controller.…”
Section: Introduction
confidence: 99%