This paper is devoted to H∞ consensus design and online scheduling for homogeneous multiagent systems (MASs) with switching topologies via deep reinforcement learning. The model of homogeneous MASs with switching topologies is established based on switched-systems theory, in which the switching of topologies is viewed as switching among subsystems. By employing a linear transformation, the closed-loop MASs are converted into reduced-order systems, so the problem of H∞ consensus design can be transformed into an H∞ control problem. The consensus protocol is supposed to consist of two parts: a dynamics-based protocol, which guarantees convergence and weighted disturbance attenuation, and a learning-based protocol, which improves the transient performance. The multiple Lyapunov function (MLF) method and the mode-dependent average dwell time (MDADT) method are then combined to ensure the stability and a weighted H∞ disturbance attenuation index of the reduced-order systems. Sufficient conditions for the existence of the dynamics-based protocol are given in terms of feasible solutions of linear matrix inequalities (LMIs). Moreover, the online scheduling is formulated as a Markov decision process, and the deep deterministic policy gradient (DDPG) algorithm, within the actor-critic framework, is employed to explore an optimal control policy that compensates for disturbances. The online scheduling of the MAS parameters is viewed as a bounded compensation of the dynamics-based protocol, whose stability can be guaranteed by nonfragile control theory. Finally, simulation results illustrate the effectiveness and superiority of the proposed method.
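For context, the MDADT constraint commonly paired with multiple Lyapunov functions takes the following standard form from the switched-systems literature; the symbols below (mode index p, decay rate λ_p, jump bound μ_p) follow that literature and are not taken from this abstract, which does not state the paper's exact conditions.

```latex
% For each mode $p$, suppose the mode's Lyapunov function decays while active
% and grows by at most a bounded factor at switching instants $t_k$:
\dot{V}_p(x(t)) \le -\lambda_p V_p(x(t)), \qquad
V_p(x(t_k)) \le \mu_p V_q(x(t_k^-)), \quad \mu_p \ge 1 .
% Stability is then retained whenever each mode-dependent average dwell time
% exceeds its threshold:
\tau_{ap} > \tau_{ap}^{*} = \frac{\ln \mu_p}{\lambda_p} .
```

Intuitively, each topology (mode) must be active long enough on average for the decay at rate λ_p to absorb the ln μ_p jump incurred when switching into it.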
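The two-part protocol structure can be illustrated with a minimal sketch: diffusive feedback under the currently active topology (the dynamics-based part) plus a learned action clipped to a bound (the learning-based part, kept bounded as in nonfragile control). Everything here is an illustrative assumption, not the paper's setup: single-integrator agents, two hand-picked topologies, and the gains `K` and `RL_BOUND`.

```python
# Hypothetical sketch of a combined consensus protocol under switching
# topologies. All dynamics, topologies, and gains are illustrative
# assumptions, not taken from the paper.

# Two candidate undirected topologies (adjacency matrices) over 3 agents;
# switching between them is treated as switching between subsystems.
A_TOPOLOGIES = [
    [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]],   # path: 0-1-2
    [[0, 0, 1],
     [0, 0, 1],
     [1, 1, 0]],   # star centered at agent 2
]

K = 0.5          # dynamics-based feedback gain (assumed feasible from LMIs)
RL_BOUND = 0.1   # bound on the learning-based compensation (nonfragile idea)

def clip(v, bound):
    return max(-bound, min(bound, v))

def protocol(x, sigma, rl_actions):
    """Combined protocol: diffusive coupling plus bounded learned term."""
    A = A_TOPOLOGIES[sigma]
    u = []
    for i in range(len(x)):
        # dynamics-based part: diffusive coupling under the active topology
        u_dyn = K * sum(A[i][j] * (x[j] - x[i]) for j in range(len(x)))
        # learning-based part: bounded compensation from the RL policy
        u.append(u_dyn + clip(rl_actions[i], RL_BOUND))
    return u

def step(x, u, dt=0.1):
    """Forward-Euler update for single-integrator agents."""
    return [xi + dt * ui for xi, ui in zip(x, u)]

# Simulate with a switch every 10 steps (a dwell-time-like schedule) and the
# learned compensation set to zero, so only the dynamics-based part acts.
x = [1.0, -0.5, 2.0]
for t in range(200):
    sigma = (t // 10) % 2
    x = step(x, protocol(x, sigma, [0.0, 0.0, 0.0]))
spread = max(x) - min(x)   # disagreement shrinks toward consensus
```

Because both adjacency matrices are symmetric, the diffusive term preserves the average state, so the agents converge to the mean of their initial conditions while the disagreement contracts under either topology.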
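The disturbance-compensation role of the learned policy can be sketched with the deterministic policy gradient at the heart of DDPG. This is a deliberately stripped-down illustration under stated assumptions: 1-D error dynamics with a constant disturbance, an affine actor, and a *known* one-step cost so the critic gradient is exact. A full DDPG agent would instead learn the critic from a replay buffer and use target networks and exploration noise.

```python
# Simplified deterministic-policy-gradient sketch (the core update of DDPG),
# learning an action that cancels a constant disturbance. The 1-D dynamics,
# disturbance D, and affine actor are illustrative assumptions.

D = 0.3        # constant disturbance to be compensated (assumed)
LR = 0.1       # actor learning rate
STATES = [-1.0, -0.5, 0.0, 0.5, 1.0]   # sampled tracking errors

def actor(w, s):
    """Affine deterministic policy a = w0*s + w1."""
    return w[0] * s + w[1]

def q_grad_a(s, a):
    """Exact critic gradient dQ/da for the one-step cost Q(s,a) = -(s+D+a)^2.
    In real DDPG this gradient comes from a learned critic network."""
    return -2.0 * (s + D + a)

w = [0.0, 0.0]
for _ in range(500):
    # deterministic policy gradient: dJ/dw = E[ dQ/da * da/dw ]
    g0 = sum(q_grad_a(s, actor(w, s)) * s for s in STATES) / len(STATES)
    g1 = sum(q_grad_a(s, actor(w, s)) for s in STATES) / len(STATES)
    w[0] += LR * g0
    w[1] += LR * g1

# The learned policy approaches a = -(s + D): it drives the error to zero
# while simultaneously cancelling the disturbance.
```

The actor is updated by ascending the critic's gradient with respect to the action, chained through the actor's parameters; that chain rule is exactly the DDPG actor update, regardless of how the critic itself is obtained.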