This paper proposes a deep reinforcement learning (DRL) approach for combined heat and power (CHP) system economic dispatch that is suitable for different operating scenarios and can significantly decrease computational complexity without sacrificing accuracy. In terms of problem formulation, most CHP economic dispatch problems are modeled as a high-dimensional, non-smooth objective function with a large number of non-linear constraints, so that powerful optimization algorithms and considerable time are required to solve them. To reduce the solution time, most engineering applications linearize the optimization objective and the device models. To avoid this complicated linearization process, this paper models the CHP economic dispatch problem as a Markov decision process (MDP), which makes the model highly encapsulated while preserving the input and output characteristics of the various devices. Furthermore, we adapt an advanced deep reinforcement learning algorithm, distributed proximal policy optimization (DPPO), to the CHP economic dispatch problem. Based on this algorithm, an agent is trained to explore optimal dispatch strategies for different operating scenarios and respond efficiently to system emergencies. In the deployment phase, the trained agent generates the optimal control strategy in real time based on the current system state. Compared with existing optimization methods, the advantages of the proposed DRL method are mainly reflected in three aspects: 1) Adaptability: given the same network topology, the trained agent can handle the economic dispatch problem in various operating scenarios without recalculation. 2) High encapsulation: the user only needs to input the operating state to obtain the control strategy, whereas an optimization algorithm requires the constraints and other formulas to be rewritten for each new situation. 3) Time-scale flexibility: the approach can be applied to both day-ahead optimal scheduling and real-time control. We also give a rigorous proof that the DRL method converges to the optimal solution. To evaluate the performance of the proposed economic dispatch algorithm, a comprehensive numerical analysis is conducted. The results show that the training time of our improved algorithm is 201 seconds and 318 seconds less than those of two other advanced DRL algorithms, and that the difference in economic performance between this method and optimization methods is only 0.029%. Even when the wind power of the system drops to zero, the trained agent can still find the optimal dispatch strategy without retraining.
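The abstract does not detail the MDP formulation, so the following is only a rough illustrative sketch of what such a dispatch environment could look like: all device capacities, cost coefficients, and demand profiles below are hypothetical placeholders, not the paper's actual model.

```python
import numpy as np

class CHPDispatchEnv:
    """Toy MDP for CHP economic dispatch (illustrative only).

    State  : [power demand, heat demand, available wind power]
    Action : [CHP power setpoint, boiler heat setpoint] in [0, 1]
    Reward : negative operating cost, minus a penalty for unmet demand.
    """

    def __init__(self, horizon=24, seed=0):
        self.horizon = horizon
        self.rng = np.random.default_rng(seed)
        self.chp_capacity = 50.0      # MW, hypothetical
        self.boiler_capacity = 40.0   # MWth, hypothetical
        self.heat_to_power = 0.8      # heat produced per unit of CHP power
        self.reset()

    def reset(self):
        self.t = 0
        return self._observe()

    def _observe(self):
        # Sinusoidal demand and random wind stand in for real forecast data.
        p_demand = 30.0 + 10.0 * np.sin(2 * np.pi * self.t / self.horizon)
        h_demand = 25.0 + 8.0 * np.cos(2 * np.pi * self.t / self.horizon)
        wind = self.rng.uniform(0.0, 15.0)
        self.state = np.array([p_demand, h_demand, wind])
        return self.state

    def step(self, action):
        chp_p = np.clip(action[0], 0, 1) * self.chp_capacity
        boil_h = np.clip(action[1], 0, 1) * self.boiler_capacity
        p_demand, h_demand, wind = self.state
        # Quadratic fuel cost for the CHP unit, linear for the boiler
        # (placeholder coefficients).
        cost = 0.04 * chp_p**2 + 2.0 * chp_p + 1.5 * boil_h
        # Penalize any unmet electricity or heat demand.
        p_gap = max(0.0, p_demand - chp_p - wind)
        h_gap = max(0.0, h_demand - self.heat_to_power * chp_p - boil_h)
        reward = -(cost + 100.0 * (p_gap + h_gap))
        self.t += 1
        done = self.t >= self.horizon
        return self._observe(), reward, done
```

Framing dispatch this way is what gives the claimed encapsulation: the agent only ever sees states, actions, and rewards, so device models can change internally without the interface changing.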
Autonomous motion planning (AMP) of unmanned aerial vehicles (UAVs) aims to enable a UAV to fly safely to a target without human intervention. Recently, several emerging deep reinforcement learning (DRL) methods have been employed to address the AMP problem in simplified environments, with good results. This paper proposes a multiple experience pools (MEPs) framework that leverages human expert experiences to speed up DRL training. Based on the deep deterministic policy gradient (DDPG) algorithm, an MEP–DDPG algorithm was designed that uses model predictive control and simulated annealing to generate expert experiences. When this algorithm was applied to a complex unknown simulation environment constructed from the parameters of a real UAV, the training experiments showed a performance improvement exceeding 20% over the state-of-the-art DDPG. The experimental testing further indicates that UAVs trained with MEP–DDPG can stably complete a variety of tasks in complex, unknown environments.
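The abstract leaves the pool-mixing mechanics unspecified; a minimal sketch of the MEP idea, assuming a simple fixed mixing ratio between expert and agent transitions (the 50/50 ratio and pool sizes are assumptions, not from the paper), might look like:

```python
import random

class MultiExperiencePool:
    """Keep expert and agent transitions in separate pools and mix them
    in each training minibatch (illustrative sketch of the MEP framework)."""

    def __init__(self, capacity=100_000):
        self.expert_pool = []   # filled offline, e.g. via MPC + simulated annealing
        self.agent_pool = []    # filled online by the DDPG actor
        self.capacity = capacity

    def add(self, transition, expert=False):
        pool = self.expert_pool if expert else self.agent_pool
        pool.append(transition)
        if len(pool) > self.capacity:
            pool.pop(0)  # discard the oldest transition

    def sample(self, batch_size, expert_ratio=0.5):
        # Draw a fraction of the batch from expert data, the rest from
        # the agent's own experience; fall back gracefully if a pool is small.
        n_expert = min(int(batch_size * expert_ratio), len(self.expert_pool))
        n_agent = min(batch_size - n_expert, len(self.agent_pool))
        return (random.sample(self.expert_pool, n_expert)
                + random.sample(self.agent_pool, n_agent))
```

Seeding minibatches with expert transitions like this is one common way to bootstrap a DDPG critic before the actor can generate useful data on its own.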
In this paper, a novel deep reinforcement learning (DRL) method, robust deep deterministic policy gradient (Robust-DDPG), is proposed for developing a controller that allows robust flight of an unmanned aerial vehicle (UAV) in dynamic, uncertain environments. This technique is applicable in many fields, such as penetration and remote surveillance. The learning-based controller is constructed with an actor-critic framework and performs dual-channel continuous control (roll and speed) of the UAV. To overcome the fragility and volatility of the original DDPG, three critical learning tricks are introduced in Robust-DDPG: (1) a delayed-learning trick, providing stable learning in dynamic environments; (2) an adversarial attack trick, improving the policy's adaptability to uncertain environments; and (3) a mixed exploration trick, enabling faster convergence of the model. The training experiments show great improvement in convergence speed, convergence quality, and stability. The exploitation experiments demonstrate high efficiency in providing the UAV with a shorter and smoother path, while the generalization experiments verify better adaptability to complicated, dynamic, and uncertain environments compared with the deep Q-network (DQN) and DDPG algorithms.
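The abstract names the mixed exploration trick without describing its mechanism; one plausible reading, sketched below under assumed schedules and noise scales (the annealing rate and Gaussian scale are placeholders, not the paper's settings), is to blend broad uniform exploration early in training with fine-grained noise around the policy later:

```python
import numpy as np

def mixed_exploration_action(policy_action, step, total_steps,
                             action_low=-1.0, action_high=1.0, rng=None):
    """Illustrative mixed exploration: anneal from mostly uniform random
    actions toward Gaussian-perturbed policy actions."""
    rng = rng or np.random.default_rng()
    # Linearly anneal the random-action probability, floored at 5%.
    eps = max(0.05, 1.0 - step / (0.5 * total_steps))
    if rng.random() < eps:
        # Broad uniform exploration, dominant early in training.
        return rng.uniform(action_low, action_high, size=np.shape(policy_action))
    # Fine-grained Gaussian exploration around the learned policy later on.
    noise = rng.normal(0.0, 0.1, size=np.shape(policy_action))
    return np.clip(policy_action + noise, action_low, action_high)
```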
In deep reinforcement learning, network convergence is often slow and prone to local optima. For environments with reward saltation, we propose a magnify saltatory reward (MSR) algorithm with variable parameters, approached from the perspective of sample usage. MSR dynamically adjusts the rewards of experiences with reward saltation in the experience pool, thereby increasing the agent's utilization of these experiences. We conducted experiments in a simulated obstacle-avoidance search environment for an unmanned aerial vehicle and compared the experimental results of deep Q-network (DQN), double DQN, and dueling DQN after adding MSR. The results demonstrate that, with MSR, the algorithms exhibit faster network convergence and more easily obtain the global optimal solution.
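As a rough sketch of the MSR idea, assuming saltation is detected as a large reward jump between consecutive transitions and that the magnification factor decays toward 1 over training (the threshold, factor, and decay values below are illustrative, not from the paper):

```python
def magnify_saltatory_rewards(transitions, threshold=5.0, factor0=2.0,
                              decay=0.999, step=0):
    """Illustrative MSR pass over a replay buffer.

    transitions: time-ordered list of dicts with keys 's', 'a', 'r', 's2', 'done'.
    A transition whose reward jumps by more than `threshold` relative to the
    previous step is treated as saltatory and its reward is magnified by a
    factor that anneals toward 1.0 as training progresses.
    """
    factor = 1.0 + (factor0 - 1.0) * (decay ** step)
    out = []
    prev_r = None
    for tr in transitions:
        r = tr['r']
        if prev_r is not None and abs(r - prev_r) > threshold:
            tr = dict(tr, r=r * factor)  # amplify the saltatory reward signal
        out.append(tr)
        prev_r = r
    return out
```

Amplifying these rare, sharply changing rewards increases their influence on the temporal-difference targets, which is consistent with the abstract's claim of better utilization of such experiences.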