As renewable energy sources penetrate the power grid at ever higher levels, traditional virtual synchronous generator (VSG) control strategies have become inadequate for today's low-damping, low-inertia power systems. This paper therefore proposes an adaptive VSG inertia and damping control method based on the multi-agent deep deterministic policy gradient (MADDPG) algorithm. The paper first reviews the working principle of the virtual synchronous generator and establishes a corresponding VSG model. Based on this model, the influence of variations in the virtual inertia (J) and damping (D) coefficients on active power oscillations is examined, which defines the action space for J and D. The proposed method follows the "centralized training, decentralized execution" paradigm. In the centralized training phase, each agent's critic network shares global observation and action information to guide its actor network during policy optimization. In the decentralized execution phase, each agent observes only its local frequency deviation and rate of change of angular frequency, and uses the learned policy to adjust the virtual inertia J and damping coefficient D in real time. Finally, the effectiveness of the proposed MADDPG control strategy is validated through comparison with an adaptive control method and a single-agent DDPG method.
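The abstract does not give the model equations, but the role of J and D can be illustrated with the torque-balance form of the swing equation commonly used for VSGs. The sketch below is a minimal simulation under stated assumptions: a 50 Hz grid, Euler integration, and hypothetical numeric values for J, D, the power setpoint, and a load step; none of these come from the paper itself.

```python
import numpy as np

# Minimal VSG swing-equation sketch (hypothetical parameters, not the paper's).
# Assumed torque-balance form: J * dω/dt = P_set/ω_n - P_e/ω_n - D * (ω - ω_n)
OMEGA_N = 2 * np.pi * 50.0  # nominal angular frequency, rad/s (50 Hz grid assumed)

def vsg_step(omega, p_set, p_e, J, D, dt=1e-3):
    """One Euler step of the swing equation for a given virtual J and D."""
    d_omega = (p_set / OMEGA_N - p_e / OMEGA_N - D * (omega - OMEGA_N)) / J
    return omega + d_omega * dt

# Compare responses to a load step for two (J, D) pairs to see how the choice
# of virtual inertia and damping shapes the frequency swing.
for J, D in [(0.2, 5.0), (0.8, 20.0)]:  # hypothetical coefficient values
    omega, trace = OMEGA_N, []
    for k in range(2000):
        p_e = 10e3 if k < 500 else 12e3  # 2 kW load step at t = 0.5 s
        omega = vsg_step(omega, p_set=10e3, p_e=p_e, J=J, D=D)
        trace.append(omega - OMEGA_N)
    print(f"J={J}, D={D}: peak |Δω| = {max(abs(min(trace)), abs(max(trace))):.3f} rad/s")
```

Running this shows the trade-off the paper's action space exploits: a larger J slows the frequency excursion while a larger D reduces its steady-state magnitude, which is why adapting both online can outperform fixed coefficients.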
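The centralized-training/decentralized-execution structure described in the abstract can also be sketched in code. The PyTorch skeleton below is an illustrative assumption, not the authors' implementation: the agent count, network sizes, and the names Actor and CentralCritic are all hypothetical. It shows the key asymmetry of MADDPG: each actor sees only its local observation (Δω, dω/dt), while each agent's critic scores the joint observations and actions of all agents.

```python
import torch
import torch.nn as nn

# All dimensions are illustrative: obs = (Δω, dω/dt), act = (J, D) adjustments.
N_AGENTS, OBS_DIM, ACT_DIM = 3, 2, 2

class Actor(nn.Module):
    """Decentralized actor: maps one agent's local observation to its (J, D) action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, ACT_DIM), nn.Tanh())
    def forward(self, obs):
        # Output in [-1, 1]; would be rescaled into the admissible J/D action box.
        return self.net(obs)

class CentralCritic(nn.Module):
    """Centralized critic: scores the joint observation-action of all agents."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_AGENTS * (OBS_DIM + ACT_DIM), 128), nn.ReLU(),
            nn.Linear(128, 1))
    def forward(self, all_obs, all_act):
        return self.net(torch.cat([all_obs, all_act], dim=-1))

actors = [Actor() for _ in range(N_AGENTS)]
critics = [CentralCritic() for _ in range(N_AGENTS)]  # one global-view critic per agent

# Decentralized execution: each agent acts on its own local measurement only.
local_obs = torch.randn(N_AGENTS, OBS_DIM)  # stand-in for (Δω, dω/dt) readings
actions = torch.stack([a(o) for a, o in zip(actors, local_obs)])

# Centralized training: agent 0's critic sees every agent's observation and action.
q_value = critics[0](local_obs.flatten(), actions.flatten())
```

Because the critics are used only during training, they can be discarded at deployment, leaving each VSG with a lightweight local policy, which matches the real-time adjustment of J and D described in the abstract.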