2021 IEEE 3rd International Conference on Frontiers Technology of Information and Computer (ICFTIC)
DOI: 10.1109/icftic54370.2021.9647298

Collaborative Coverage Path Planning of UAV Cluster based on Deep Reinforcement Learning

Cited by 13 publications (7 citation statements)
References 9 publications
“…The authors in [4] optimize the data transmission mode and the trajectories of two UAVs to improve the performance of the communication system. Additionally, [5] investigates the optimal path to visit all points in a target area under time and energy constraints. In contrast to the above, this paper investigates an obstacle-avoidance route from the start point to the destination for each UAV, taking completion of the distribution task as a premise.…”
Section: A Problem Statement
confidence: 99%
“…DRL-based approaches can tackle CPP problems directly, without the need for prior decomposition, but are generally underexplored in the context of autonomous UAV coverage missions. One of the rare examples is given in [13], where a Q-learning-based algorithm is proposed for coordinating multiple UAVs on a coverage mission. In contrast to the scenario we investigate, that coverage mission is set in an environment requiring neither obstacle avoidance nor recharging.…”
Section: Related Work
confidence: 99%
“…CPP is a fundamental problem with broad applications, holding the potential to address numerous existing challenges across various domains. However, only a few papers have investigated DRL-based methods specifically tailored to the complexities of the UAV coverage problem [11]–[13], or the general applicability of DRL to ground-based CPP [14]–[20]. In this work, we present several contributions to address these gaps:…”
Section: Introduction
confidence: 99%
“…A full-coverage path planning algorithm based on Q-Learning is proposed in [23], which optimizes coverage paths on raster graphs. A reward-and-punishment mechanism within DQN (Deep Q-Network) is proposed in [24] for full-coverage flight of a UAV. Jin et al. [25] use reinforcement learning to achieve coverage of a three-dimensional object representation, and demonstrate that an ϵ-greedy strategy outperforms a pure greedy strategy.…”
Section: Introduction
confidence: 99%
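The Q-learning-based coverage planning that the statements above describe can be sketched in tabular form on a small raster grid. This is a minimal illustration only: the grid size, reward values (+1 for an unvisited cell, −0.1 otherwise), and hyperparameters are assumptions for the sketch, not taken from [23]–[25] or from the cited paper.

```python
import random

def train_coverage_agent(grid_w, grid_h, episodes, epsilon,
                         alpha=0.5, gamma=0.9, seed=0):
    """Tabular Q-learning for full-coverage path planning on a raster grid.

    State: (position, frozenset of visited cells).
    Reward: +1 for entering an unvisited cell, -0.1 otherwise (assumed values).
    An episode ends when every cell is covered or a step budget is exhausted.
    """
    rng = random.Random(seed)
    actions = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # up, down, right, left
    Q = {}
    n_cells = grid_w * grid_h
    max_steps = 4 * n_cells

    def q(state, a):
        return Q.get((state, a), 0.0)

    for _ in range(episodes):
        pos = (0, 0)
        visited = frozenset([pos])
        for _ in range(max_steps):
            state = (pos, visited)
            # epsilon-greedy action selection, as in the cited comparison
            if rng.random() < epsilon:
                a = rng.randrange(len(actions))
            else:
                a = max(range(len(actions)), key=lambda i: q(state, i))
            dx, dy = actions[a]
            nxt = (min(max(pos[0] + dx, 0), grid_w - 1),
                   min(max(pos[1] + dy, 0), grid_h - 1))
            reward = 1.0 if nxt not in visited else -0.1
            nvisited = visited | {nxt}
            nstate = (nxt, nvisited)
            best_next = max(q(nstate, i) for i in range(len(actions)))
            # standard Q-learning temporal-difference update
            Q[(state, a)] = q(state, a) + alpha * (
                reward + gamma * best_next - q(state, a))
            pos, visited = nxt, nvisited
            if len(visited) == n_cells:
                break
    return Q

def coverage_ratio(Q, grid_w, grid_h):
    """Greedy rollout of the learned policy; fraction of cells covered."""
    actions = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    pos = (0, 0)
    visited = {pos}
    n_cells = grid_w * grid_h
    for _ in range(4 * n_cells):
        state = (pos, frozenset(visited))
        a = max(range(4), key=lambda i: Q.get((state, i), 0.0))
        dx, dy = actions[a]
        pos = (min(max(pos[0] + dx, 0), grid_w - 1),
               min(max(pos[1] + dy, 0), grid_h - 1))
        visited.add(pos)
        if len(visited) == n_cells:
            break
    return len(visited) / n_cells
```

Because the visited set is part of the state, this sketch is only tractable for tiny grids; the DQN-based variants cited above replace the table with a neural network precisely to avoid this state-space blowup.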