2020
DOI: 10.1016/j.robot.2020.103594
Fixed-Wing UAVs flocking in continuous spaces: A deep reinforcement learning approach

Cited by 58 publications (28 citation statements)
References 34 publications
“…To verify the feasibility and effectiveness of our proposed method, we build a UAV hardware-in-the-loop (HITL) real-time simulation system [27] and conduct the flight simulation experiments based on this system.…”
Section: Methods (mentioning, confidence: 99%)
“…Recently, many approaches have been developed to realize flocking navigation for multi-UAV systems. For example, Yan et al [7] considered the leader-follower flocking problem of fixed-wing UAVs in the context of deep reinforcement learning. The followers can always follow the leader closely.…”
Section: Introduction (mentioning, confidence: 99%)
“…Most of the previous studies either predefine the path of every UAV, or give the information of the leader to them [7][8][9][10][11], both of which are hard to realize in practice. Firstly, the mechanism of receiving the path information remotely from the ground station requires a communication device to be equipped on each UAV, which in turn burdens the data transmission load.…”
Section: Introduction (mentioning, confidence: 99%)
“…Unmanned aerial vehicle (UAV) flocking has also been a target for the application of deep reinforcement learning. Using simulation, a flocking controller was trained to control a follower's roll angle and velocity to keep a certain distance from a leader to avoid collisions [11]. In terms of deep reinforcement learning applied to control the attitude of aircraft, DDPG, trust region policy optimisation (TRPO [12]) and proximal policy optimisation (PPO [13]) algorithms have been used for quadrotors [14].…”
Section: Introduction (mentioning, confidence: 99%)
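The statement above summarizes a leader-follower setup in which a learned policy commands a follower's roll angle and velocity so as to hold a target separation from the leader while avoiding collisions. As a minimal illustration of the kind of dense reward such a setup might use (this is a hedged sketch with assumed constants, not the cited papers' exact formulation):

```python
import math

# Assumed values for illustration only:
D_REF = 40.0   # desired leader-follower separation (m)
D_MIN = 10.0   # collision threshold (m)

def follower_reward(follower_xy, leader_xy):
    """Dense reward: maximal (0) when the separation equals D_REF,
    heavily penalized inside the collision radius D_MIN."""
    dx = leader_xy[0] - follower_xy[0]
    dy = leader_xy[1] - follower_xy[1]
    d = math.hypot(dx, dy)
    if d < D_MIN:
        return -10.0                  # collision penalty
    return -abs(d - D_REF) / D_REF    # distance-tracking term, <= 0

# In a full pipeline, a policy network trained with an actor-critic
# algorithm (e.g. DDPG or PPO) would map the follower's observation of
# the leader's relative state to continuous roll-angle and airspeed
# commands, with a reward of this shape guiding training.
```

The reward shape (zero at the reference distance, increasingly negative away from it, a large penalty inside the collision radius) is a common choice for continuous-space flocking tasks; the specific constants and functional form here are illustrative assumptions.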