This paper proposes a novel guidance law for intercepting a high-speed maneuvering target based on deep reinforcement learning, which mainly comprises an interceptor–target relative motion model and a value-function approximation model based on a deep Q-network (DQN) with prioritized experience replay. First, prioritized experience replay is applied to extract more informative samples and reduce training time. Second, to cope with the discrete action space of DQN, the normal acceleration is added to the state space and the normal acceleration rate is chosen as the action; the continuous normal acceleration command is then obtained by numerical integration. Third, to make the line-of-sight (LOS) rate converge rapidly, a reward function whose absolute value tends to zero is constructed. Finally, simulation experiments on intercepting high-speed maneuvering targets with different acceleration policies are carried out, comparing the proposed approach with proportional navigation guidance (PNG) and a Q-learning-based guidance law (QLG). The results demonstrate that the proposed DQN-based guidance law (DQNG) produces a continuous acceleration command, makes the LOS rate converge to zero rapidly, and hits maneuvering targets using only the LOS rate. They also confirm that DQNG realizes a parallel-approach-like engagement and improves the interception performance of the interceptor against high-speed maneuvering targets. In addition, DQNG avoids the complicated formula derivation required by traditional guidance laws and eliminates acceleration buffeting.
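The prioritized-sampling step can be illustrated with a minimal proportional prioritized experience replay buffer. The sketch below is a generic simplification (array-based rather than sum-tree, with assumed hyperparameters `alpha`, `beta`, and `eps`), not the paper's exact implementation.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (no sum-tree)."""

    def __init__(self, capacity, alpha=0.6, beta=0.4, eps=1e-6):
        self.capacity = capacity
        self.alpha = alpha      # how strongly priorities bias sampling
        self.beta = beta        # importance-sampling correction strength
        self.eps = eps          # keeps every priority strictly positive
        self.data = []
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0

    def add(self, transition):
        # New transitions get the current maximum priority so they are
        # sampled at least once before their TD error is known.
        max_prio = self.priorities.max() if self.data else 1.0
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            self.data[self.pos] = transition
        self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        prios = self.priorities[:len(self.data)]
        probs = prios ** self.alpha
        probs /= probs.sum()
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        # Importance-sampling weights correct the bias of non-uniform sampling.
        weights = (len(self.data) * probs[idx]) ** (-self.beta)
        weights /= weights.max()
        batch = [self.data[i] for i in idx]
        return batch, idx, weights

    def update_priorities(self, idx, td_errors):
        # Priority is |TD error| + eps, as in proportional prioritization.
        self.priorities[idx] = np.abs(td_errors) + self.eps
```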
We propose a steady-state aerodynamic data-driven method to predict the incompressible flow around NACA (National Advisory Committee for Aeronautics) 0012-series airfoils. Using the Signed Distance Function (SDF) to parameterize the geometric and flow-condition setups, the prediction core of the method is a consecutive framework of a convolutional neural network (CNN) followed by a deconvolutional neural network (DCNN). The impact of training parameters on the behavior of the proposed CNN-DCNN model is studied so that an appropriate learning rate, mini-batch size, and random deactivation rate can be specified. Tested on “unseen” airfoil geometries and far-field velocities, the prediction process is found to be three orders of magnitude faster than a corresponding Computational Fluid Dynamics (CFD) simulation, while relative errors remain below 1% at most sample points. The proposed model captures the essential dynamics of the flow field, as its predictions agree reasonably well with the field reconstructed by proper orthogonal decomposition (POD). The performance and accuracy of the proposed model indicate that the deep learning-based approach has great potential as a robust predictive tool for aerodynamic design and optimization.
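As a rough illustration of the CNN-DCNN pipeline, the following PyTorch sketch pairs a convolutional encoder with a deconvolutional (transposed-convolution) decoder. The channel counts, kernel sizes, input resolution, and dropout rate are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CNNDCNN(nn.Module):
    """Encoder-decoder sketch: SDF/flow-condition channels in, 2-D velocity field out."""

    def __init__(self, in_channels=3, out_channels=2):
        super().__init__()
        # CNN encoder compresses the SDF/flow-condition image into a latent map.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1), nn.ReLU(),
        )
        # DCNN decoder upsamples back to the full flow-field resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, out_channels, kernel_size=4, stride=2, padding=1),
        )
        # "Random deactivation" applied between the two halves (assumed rate).
        self.dropout = nn.Dropout2d(p=0.1)

    def forward(self, x):
        return self.decoder(self.dropout(self.encoder(x)))

# Example: a batch of 128x128 inputs (SDF plus two far-field velocity channels).
field = CNNDCNN()(torch.randn(4, 3, 128, 128))   # -> shape (4, 2, 128, 128)
```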
In this paper, an intelligent guidance law based on the Deep Q-Network (DQN) algorithm is proposed to enable a missile to intercept different maneuvering targets following the idea of the parallel-approach method. Specifically, we propose a shaping reward function that is inversely proportional to the absolute value of the line-of-sight (LOS) angle rate, which guarantees that a control strategy is found and speeds up the training of the reinforcement learning (RL) model. Furthermore, to avoid the rapid chattering caused by directly choosing the missile acceleration as the action, we take the rate of change of the acceleration as the action in the DQN algorithm and integrate it to obtain the acceleration command. Consequently, only the LOS angle, the LOS angle rate, and the missile overload are used in the established RL model to generate the guidance command, which makes the method easy to implement. Simulation results and comparative experiments demonstrate that the proposed RL-based guidance method achieves better guidance accuracy and a higher success rate. This performance suggests that RL-based guidance is promising for maneuvering targets and deserves further investigation.
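The two ingredients described above can be sketched as follows, with an assumed discrete set of acceleration-rate actions, an assumed time step, and an assumed acceleration limit; the small epsilon term in the reward avoids division by zero and is likewise an assumption rather than the paper's exact formulation.

```python
import numpy as np

# Assumed discrete action set: candidate rates of change of missile acceleration (m/s^3).
ACTION_SET = np.array([-200.0, -100.0, 0.0, 100.0, 200.0])

def shaping_reward(los_rate, eps=1e-3):
    """Reward inversely proportional to |LOS angle rate|: grows as the rate nears zero."""
    return 1.0 / (abs(los_rate) + eps)

def next_acceleration(a_current, action_index, dt=0.01, a_max=300.0):
    """Integrate the chosen acceleration rate (Euler step) to get a smooth command."""
    a_next = a_current + ACTION_SET[action_index] * dt
    return float(np.clip(a_next, -a_max, a_max))

# One guidance step: the DQN would pick the action index from
# (LOS angle, LOS angle rate, overload); a placeholder index is used here.
a_cmd = next_acceleration(a_current=0.0, action_index=4)
r = shaping_reward(los_rate=0.02)
```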
To address the problem of single-phase earthing faults in resonant grounded systems, a new method of fault line and segment selection based on the zero-sequence current increment is proposed, derived from an analysis of faults on substation outlet lines and of the change in the fault component of the zero-sequence current. When the inductance of the arc-suppression coil is changed, the zero-sequence current in each line changes, and the method determines the fault location from the characteristics of this change. The effect of resistance grounding can be eliminated by converting the zero-sequence currents to the same voltage. MATLAB simulation experiments verify the correctness and validity of the method.
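As a hedged illustration of the selection idea, the sketch below compares per-feeder zero-sequence current magnitudes before and after the arc-suppression coil inductance is changed and flags the feeder with the largest increment. The argmax criterion and the sample values are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def select_fault_line(i0_before, i0_after):
    """
    Pick the suspected faulted feeder as the one whose zero-sequence current
    magnitude changes most when the arc-suppression coil inductance is switched.

    i0_before, i0_after: per-line zero-sequence current magnitudes (A),
    measured before and after the inductance change.
    """
    increment = np.abs(np.asarray(i0_after) - np.asarray(i0_before))
    return int(np.argmax(increment)), increment

# Illustrative (made-up) measurements for four feeders:
line, delta = select_fault_line([1.2, 0.9, 1.1, 1.0], [1.3, 0.95, 3.4, 1.05])
print(f"Suspected fault on line {line}, increments = {delta}")
```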