In modern power grids, the massive deployment of power equipment raises emerging requirements such as data perception, information transmission, and real-time control. The existing cloud computing paradigm struggles to meet challenges such as rapid response and local autonomy. Microgrids contain diverse, adjustable power components, which makes the power system complex and difficult to optimize, while traditional adjustment methods are manual and centralized, requiring substantial human effort and expert experience. Adjustment based on edge intelligence can effectively leverage ubiquitous computing capacity to provide distributed intelligent solutions, although many research issues remain open. To address these challenges, we consider a power control framework that combines edge computing and reinforcement learning, making full use of edge nodes to sense the network state and control power equipment, thereby achieving rapid response and local autonomy. We further focus on the non-convergence problem of power flow calculation and combine deep reinforcement learning with multi-agent methods to realize intelligent decisions, designing the corresponding state space, action space, and reward function. Compared with baseline methods, our method improves efficiency and scalability, and simulation results demonstrate its effectiveness, with intelligent adjustment and stable operation under various conditions.
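To make the state/action/reward design concrete, the sketch below trains a reinforcement learning agent to adjust a generator setpoint so that a bus voltage approaches its target, which is one simple instance of the kind of adjustment decision described above. This is a minimal illustration under stated assumptions, not the paper's method: it uses tabular Q-learning with a single agent on a hypothetical one-bus model (the `voltage` mapping, step sizes, and constants are all invented placeholders), whereas the paper combines deep reinforcement learning with multi-agent methods on real power flow calculations.

```python
import random

TARGET_V = 1.0  # desired bus voltage in per unit (illustrative target)

def voltage(setpoint):
    # Hypothetical toy mapping from a generator setpoint in [0, 1]
    # to a bus voltage; stands in for a real power flow calculation.
    return 0.9 + 0.2 * setpoint

def step(setpoint, action):
    # Action space: 0 = lower, 1 = hold, 2 = raise the setpoint by 0.05.
    new_sp = min(1.0, max(0.0, setpoint + (action - 1) * 0.05))
    # Reward: penalize the voltage deviation from the target.
    reward = -abs(voltage(new_sp) - TARGET_V)
    return new_sp, reward

def discretize(setpoint):
    # State: the bus voltage, rounded to 0.01 p.u. for the tabular agent.
    return round(voltage(setpoint), 2)

def train(episodes=300, steps=20, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {}  # Q-table keyed by (state, action)
    for _ in range(episodes):
        sp = rng.random()  # start each episode from a random setpoint
        for _ in range(steps):
            s = discretize(sp)
            if rng.random() < eps:  # epsilon-greedy exploration
                a = rng.randrange(3)
            else:
                a = max(range(3), key=lambda act: q.get((s, act), 0.0))
            sp, r = step(sp, a)
            s2 = discretize(sp)
            best_next = max(q.get((s2, b), 0.0) for b in range(3))
            # Standard Q-learning update toward the bootstrapped target.
            q[(s, a)] = q.get((s, a), 0.0) + alpha * (
                r + gamma * best_next - q.get((s, a), 0.0)
            )
    return q

def greedy_rollout(q, sp=0.0, steps=30):
    # Follow the learned greedy policy and report the final deviation.
    for _ in range(steps):
        s = discretize(sp)
        a = max(range(3), key=lambda act: q.get((s, act), 0.0))
        sp, _ = step(sp, a)
    return abs(voltage(sp) - TARGET_V)
```

Starting from a setpoint of 0.0 (voltage 0.9 p.u., deviation 0.1), the greedy policy learned by `train()` should repeatedly raise the setpoint until the voltage reaches the target and then hold, shrinking the deviation close to zero. In the paper's setting each edge node would host such an agent over real measurements, and the single scalar state here would be replaced by richer grid observations.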