2022
DOI: 10.3390/app12031618

UAV-Cooperative Penetration Dynamic-Tracking Interceptor Method Based on DDPG

Abstract: A multi-UAV system offers stronger robustness and better stability in combat, so the collaborative penetration of UAVs has been studied extensively in recent years. Compared with generic static combat scenes, penetration against dynamic tracking and interception is more difficult to achieve. To realize coordinated penetration of a dynamic-tracking interceptor by a multi-UAV system, an intelligent UAV model is established using the deep deterministic policy-gradient (DDPG) algorithm, and t…
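The abstract names the deep deterministic policy-gradient (DDPG) algorithm as the basis of the intelligent UAV model. As a minimal sketch only, the PyTorch snippet below illustrates the standard DDPG actor-critic update; the network sizes, hyperparameters, and names (`Actor`, `Critic`, `ddpg_update`) are illustrative assumptions, not the paper's configuration.

```python
# Minimal DDPG sketch (illustrative; dimensions and hyperparameters
# are assumptions, not taken from the paper).
import torch
import torch.nn as nn


class Actor(nn.Module):
    """Deterministic policy: maps a UAV state to a bounded control action."""
    def __init__(self, state_dim, action_dim, max_action):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh(),
        )
        self.max_action = max_action

    def forward(self, state):
        return self.max_action * self.net(state)


class Critic(nn.Module):
    """Q-network: scores a (state, action) pair."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=1))


def ddpg_update(actor, actor_targ, critic, critic_targ, batch,
                actor_opt, critic_opt, gamma=0.99, tau=0.005):
    """One DDPG step: TD target from the target networks, a critic
    regression, a deterministic policy-gradient step, soft updates."""
    state, action, reward, next_state, done = batch

    # Critic: regress Q(s, a) onto the bootstrapped TD target.
    with torch.no_grad():
        target_q = reward + gamma * (1.0 - done) * critic_targ(
            next_state, actor_targ(next_state))
    critic_loss = nn.functional.mse_loss(critic(state, action), target_q)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Actor: ascend the critic's value of the policy's own actions.
    actor_loss = -critic(state, actor(state)).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()

    # Polyak (soft) update of both target networks.
    for net, targ in ((actor, actor_targ), (critic, critic_targ)):
        for p, p_targ in zip(net.parameters(), targ.parameters()):
            p_targ.data.mul_(1.0 - tau).add_(tau * p.data)
```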

Cited by 7 publications (3 citation statements) · References 19 publications
“…(2) Based on the reinforcement learning network. For example, Yuxie Luo [12] established an intelligent UAV model using a reinforcement learning network and built a reward function from the cooperative parameters of multiple UAVs to guide the UAVs in collaborative penetration; Yue Li [13] used reinforcement learning algorithms for training in four scenarios (frontal attack, escape, pursuit, and energy storage), thereby improving the intelligent decision-making level of air confrontation; Kaifang Wan [14] proposed a motion control method based on deep reinforcement learning (DRL), which provides additional flexibility for UAV penetration within the DRL framework; Liang Li [15] focused mainly on the winning regions of the three players in the reconnaissance penetration game, proposed an explicit policy method for analyzing and constructing barriers, and provided a complete solution by integrating the games of kind and degree.…”
Section: Related Work
confidence: 99%
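The statement above attributes the cooperative behavior to a reward function built from the cooperative parameters of multiple UAVs. The excerpt does not give the actual reward terms or weights, so the sketch below is a hypothetical shaping of such a reward; every name, weight, and term (`cooperative_reward`, `w_progress`, `w_spread`) is an assumption for illustration.

```python
import numpy as np


def cooperative_reward(own_dist, peer_dists, intercepted, hit,
                       w_progress=1.0, w_spread=0.5):
    """Hypothetical per-step reward for one UAV in a cooperative
    penetration episode.

    own_dist    -- this UAV's current distance to the target
    peer_dists  -- list of the other UAVs' distances (the
                   "cooperative parameters" of the group)
    intercepted -- True if this UAV was caught by the interceptor
    hit         -- True if this UAV reached the target
    """
    if hit:
        return 100.0       # terminal bonus for a successful penetration
    if intercepted:
        return -100.0      # terminal penalty for being intercepted
    # Dense shaping: pull the UAV toward the target ...
    progress = -w_progress * own_dist
    # ... while rewarding dispersion of the group, so a single
    # dynamic-tracking interceptor cannot cover every UAV at once.
    spread = w_spread * float(np.std([own_dist, *peer_dists]))
    return progress + spread
```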
“…When the UAV hits the target successfully, it is considered to have completed the hit subtask m_hit, so the corresponding subtask m_t is given by Formula (12).…”
Section: Task Completion Division
confidence: 99%
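Formula (12) itself is not reproduced in the excerpt. Purely as a hypothetical illustration of the indicator-style subtask flag the statement describes (not the paper's actual Formula (12)), such a definition could take the form

$$
m_t =
\begin{cases}
m_{\text{hit}}, & \text{if the UAV hits the target at step } t,\\
0, & \text{otherwise.}
\end{cases}
$$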
“…Li et al. [7] adopted the Gaussian pseudo-spectral method to realize the time-coordination strategy of reusable launch vehicles (RLVs) while satisfying the no-fly constraints at the same time. Due to the huge benefits of time cooperation, time-cooperation technology has received worldwide attention in many studies [1][2][3]. Yu et al. [4] designed a two-stage strategy in which the attack-angle cooperation of the multiple vehicles is realized in the first stage, and the attack-time cooperation is achieved in the final stage.…”
Section: Introduction
confidence: 99%