This paper focuses on the economic power dispatch (EPD) operation of a microgrid in an OPAL-RT environment. First, a long short-term memory (LSTM) network is proposed to forecast the load demand of the microgrid, which is then used to determine the output of the power generators and the charging/discharging control strategy of the battery energy storage system (BESS). Then, a deep reinforcement learning method, the deep deterministic policy gradient (DDPG), is utilized to determine the power dispatch of the microgrid that minimizes the total energy expense while considering power constraints, load uncertainties, and electricity prices. Moreover, a microgrid built on Cimei Island in the Penghu Archipelago, Taiwan, is investigated to examine compliance with the equality and inequality constraints and to assess the performance of the deep reinforcement learning method. Furthermore, the proposed method is compared with an experience-based energy management system (EMS), Newton particle swarm optimization (Newton-PSO), and a deep Q-network (DQN) to evaluate the obtained solutions. In this study, the average forecast error of the LSTM network is less than 5%. In addition, the proposed method achieves a daily electricity cost 3.8% to 7.4% lower than that of the other methods. Finally, a detailed emulation in the OPAL-RT environment is carried out to validate the effectiveness of the proposed method.
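To make the forecasting step concrete, the sketch below shows a minimal LSTM-based next-hour load forecaster in PyTorch. The abstract does not specify the network architecture or training configuration, so the window length (24 h), hidden size, layer count, training loop, and the synthetic load series are illustrative assumptions rather than the paper's actual setup.

```python
# Minimal sketch of an LSTM load forecaster (illustrative only; the paper's
# architecture and data are not given in the abstract).
import numpy as np
import torch
import torch.nn as nn


class LoadForecaster(nn.Module):
    """Predicts the next-hour load from a window of past load values."""

    def __init__(self, input_size=1, hidden_size=64, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                 # x: (batch, window, input_size)
        out, _ = self.lstm(x)             # out: (batch, window, hidden_size)
        return self.head(out[:, -1, :])   # forecast from the last time step


def make_windows(series, window=24):
    """Slice a 1-D load series into (window, 1) inputs and next-step targets."""
    xs, ys = [], []
    for i in range(len(series) - window):
        xs.append(series[i:i + window])
        ys.append(series[i + window])
    x = torch.tensor(np.array(xs), dtype=torch.float32).unsqueeze(-1)
    y = torch.tensor(np.array(ys), dtype=torch.float32).unsqueeze(-1)
    return x, y


if __name__ == "__main__":
    # Synthetic daily-periodic load stands in for the measured microgrid data.
    hours = np.arange(24 * 60)
    load = 1.0 + 0.3 * np.sin(2 * np.pi * hours / 24) + 0.05 * np.random.randn(len(hours))

    x, y = make_windows(load.astype(np.float32))
    model = LoadForecaster()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for epoch in range(50):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()

    # Mean absolute percentage error on the training windows (illustrative only).
    with torch.no_grad():
        mape = (torch.abs(model(x) - y) / y).mean().item() * 100
    print(f"training MAPE: {mape:.2f}%")
```

In the paper's workflow, a forecaster of this kind would supply the expected load profile that the DDPG-based dispatch then uses when scheduling generator output and BESS charging/discharging against the electricity price.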