The increasing integration of intermittent renewable energy sources (RESs) poses significant challenges to active distribution networks (ADNs), such as frequent voltage fluctuations. This paper proposes a novel operation strategy for ADNs based on multiagent deep reinforcement learning (MADRL), which leverages the regulating capability of switch state transitions for real-time voltage regulation and loss minimization. After the calculated optimal switch topologies are deployed, the distribution network operator dynamically adjusts the distributed energy resources (DERs) to enhance the operational performance of the ADN, following the policies trained by the MADRL algorithm. Owing to the model-free nature and generalization capability of deep reinforcement learning, the proposed strategy can still achieve its optimization objectives even when applied to similar but previously unseen environments. Additionally, integrating the parameter sharing (PS) and prioritized experience replay (PER) mechanisms substantially improves the performance and scalability of the strategy. The framework has been tested on modified IEEE 33-bus, IEEE 118-bus, and three-phase unbalanced 123-bus systems. The results demonstrate the strong real-time regulation capability of the proposed strategy.
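The abstract only names the PS and PER mechanisms; for readers unfamiliar with PER, the sketch below shows a generic proportional prioritized replay buffer in Python. It is not the paper's implementation: the class name, the hyperparameters `alpha`, `beta`, and `eps`, and the buffer capacity are all illustrative assumptions.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Minimal proportional PER buffer (illustrative sketch).

    Transitions are sampled with probability proportional to a
    TD-error-based priority, and importance-sampling weights correct
    the bias introduced by non-uniform sampling.
    """

    def __init__(self, capacity=10000, alpha=0.6, beta=0.4, eps=1e-6):
        self.capacity = capacity
        self.alpha = alpha      # how strongly priorities skew sampling
        self.beta = beta        # strength of importance-sampling correction
        self.eps = eps          # keeps every priority strictly positive
        self.buffer = []
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0

    def add(self, transition):
        # New transitions get the current maximum priority so they are
        # sampled at least once before their TD error is known.
        max_prio = self.priorities.max() if self.buffer else 1.0
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
        else:
            self.buffer[self.pos] = transition
        self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        prios = self.priorities[:len(self.buffer)]
        probs = prios ** self.alpha
        probs /= probs.sum()
        idxs = np.random.choice(len(self.buffer), batch_size, p=probs)
        # Importance-sampling weights, normalized by their maximum.
        weights = (len(self.buffer) * probs[idxs]) ** (-self.beta)
        weights /= weights.max()
        batch = [self.buffer[i] for i in idxs]
        return batch, idxs, weights

    def update_priorities(self, idxs, td_errors):
        # Priority is the absolute TD error plus a small constant.
        self.priorities[idxs] = np.abs(td_errors) + self.eps
```

The PS mechanism mentioned alongside PER typically means that all agents evaluate a single shared policy network (often with an agent identifier appended to each local observation), which is one common way such sharing improves scalability as the number of DER agents grows; the paper's exact architecture may differ.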