2018
DOI: 10.1007/978-3-030-01057-7_85
Application of Deep Reinforcement Learning to UAV Fleet Control

Cited by 8 publications (8 citation statements)
References 1 publication
“…Examples of such scenarios include robot team navigation (Corke et al, 2005), smart grid operation (Dall'Anese et al, 2013), and control of mobile sensor networks (Cortes et al, 2004). Here we choose unmanned aerial vehicles (UAVs) (Yang and Liu, 2018; Pham et al, 2018; Tožička et al, 2018; Shamsoshoara et al, 2019; Cui et al, 2019; Qie et al, 2019), a recently surging application scenario of multi-agent autonomous systems, as one representative example. Specifically, a team of UAVs is deployed to accomplish a cooperative task, usually without the coordination of any central controller, i.e., in a decentralized fashion.…”
Section: Cooperative Setting: Unmanned Aerial Vehicles
confidence: 99%
“…Hence, the MADDPG algorithm proposed in Lowe et al (2017) is adopted, with centralized learning and decentralized execution. Two other tasks that can be tackled by MARL include resource allocation in UAV-enabled communication networks, using a Q-learning-based method (Cui et al, 2019), and aerial surveillance and base defense in UAV fleet control, using a policy optimization method in a purely centralized fashion (Tožička et al, 2018).…”
Section: Cooperative Setting: Unmanned Aerial Vehicles
confidence: 99%
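The centralized-learning-decentralized-execution paradigm cited above can be illustrated with a minimal sketch: during training a single critic sees the joint observations and actions of all agents, while each actor acts on its own local observation only. The linear actors/critic, dimensions, and names below are illustrative assumptions, not the cited implementation.

```python
import numpy as np

N_AGENTS, OBS_DIM, ACT_DIM = 3, 4, 2
rng = np.random.default_rng(0)

# Decentralized actors: agent i maps ONLY its own observation to an action.
actors = [rng.normal(size=(ACT_DIM, OBS_DIM)) * 0.1 for _ in range(N_AGENTS)]

def act(obs_all):
    """Execution is decentralized: agent i uses obs_all[i] and nothing else."""
    return [W @ obs_all[i] for i, W in enumerate(actors)]

# Centralized critic: during training it scores the JOINT observation-action pair.
critic_w = rng.normal(size=N_AGENTS * (OBS_DIM + ACT_DIM)) * 0.1

def q_value(obs_all, act_all):
    joint = np.concatenate([np.concatenate([o, a]) for o, a in zip(obs_all, act_all)])
    return float(critic_w @ joint)

obs = [rng.normal(size=OBS_DIM) for _ in range(N_AGENTS)]
acts = act(obs)
q = q_value(obs, acts)  # used only by the learner; discarded at execution time
```

At deployment only the `actors` list is needed, which is what makes the execution phase fully decentralized.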
“…Furthermore, [108] formulates resource allocation in a downlink communication network as a stochastic game (SG) and solves it using independent Q-learning. The work in [109] applies MARL to fleet control, in particular aerial surveillance and base defense, in a fully centralized fashion.…”
Section: B. MARL For UAV-assisted Wireless Communications
confidence: 99%
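Independent Q-learning, as referenced for [108], means each agent runs standard single-agent Q-learning on its own table and implicitly treats the other agents as part of the environment. The toy environment below (state/action counts, reward rule, and hyperparameters) is an invented illustration under those assumptions, not the setup of the cited work.

```python
import numpy as np

N_AGENTS, N_STATES, N_ACTIONS = 2, 5, 3
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
rng = np.random.default_rng(1)

# One independent Q-table per agent; no information is shared between them.
Q = [np.zeros((N_STATES, N_ACTIONS)) for _ in range(N_AGENTS)]

def update(agent, s, a, r, s_next):
    # Standard one-step Q-learning update, applied per agent in isolation.
    td_target = r + GAMMA * Q[agent][s_next].max()
    Q[agent][s, a] += ALPHA * (td_target - Q[agent][s, a])

def choose(agent, s):
    # Epsilon-greedy action selection on the agent's own table.
    if rng.random() < EPS:
        return int(rng.integers(N_ACTIONS))
    return int(Q[agent][s].argmax())

# Toy dynamics: action 0 yields reward 1 in every state, others yield 0.
for _ in range(500):
    s = int(rng.integers(N_STATES))
    for i in range(N_AGENTS):
        a = choose(i, s)
        r = 1.0 if a == 0 else 0.0
        update(i, s, a, r, int(rng.integers(N_STATES)))
```

After training, each agent's Q-values for the rewarded action are strictly positive in every state, even though neither agent ever observed the other.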
“…Next, take task point 17 as an example. Before the inspection of task point 17, UAV1, UAV2, and UAV3 were assigned the patrol sequences (2, 5, 6, 8, 1, 12, 13, 14), (11, 15, 10, 20), and (18, 4, 7, 9, 19, 3), respectively. After inspecting task point 14, the total time consumption of UAV1 reached 55.006.…”
Section: If (T k
confidence: 99%
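The running "total time consumption" along a patrol sequence, as described above, is a cumulative sum of travel and inspection times over the assigned task points. The sketch below shows one way such a total could be accumulated; the coordinates, speed, inspection time, and the `cumulative_times` helper are all hypothetical and do not come from the cited work.

```python
import math

def cumulative_times(coords, sequence, speed=1.0, inspect_time=2.0, start=(0.0, 0.0)):
    """Return the UAV's total elapsed time after inspecting each task point."""
    times, pos, t = [], start, 0.0
    for task in sequence:
        nxt = coords[task]
        # travel time to the task point, then a fixed inspection time there
        t += math.dist(pos, nxt) / speed + inspect_time
        times.append(round(t, 3))
        pos = nxt
    return times

coords = {2: (1, 0), 5: (2, 1), 6: (4, 1)}  # toy task-point coordinates
cumulative_times(coords, [2, 5, 6])  # → [3.0, 6.414, 10.414]
```

The last entry of the returned list plays the role of the per-UAV total (e.g. the 55.006 reached by UAV1 after task point 14 in the cited example).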