This paper focuses on the event-triggered optimal control problem of discrete-time nonlinear multiagent systems (MASs) via a reinforcement learning method. In contrast to existing consensus protocols for discrete-time multiagent systems under ideal communication, the considered model is subject to input saturation, unknown external disturbances, and denial-of-service (DoS) attacks. Exploiting the approximation capability of the radial basis function neural network (RBF NN), a disturbance network is established to compensate for the effect of the unknown disturbance on consensus. Subsequently, a composite controller is derived with a reinforcement learning strategy to cope with the DoS attacks, where the update frequency of the actor-critic networks and the controllers is governed by a novel event-triggered mechanism. Finally, two simulation examples are provided to verify the feasibility of the proposed method.
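To make the two ingredients named above concrete, the sketch below shows a generic Gaussian-RBF disturbance approximator and a state-dependent event-triggered condition. This is an illustrative sketch only, not the paper's controller: the function names, the Gaussian width, and the threshold form `sigma * ||x||` are assumptions, and the actual paper couples these with actor-critic weight updates and a DoS-aware triggering rule.

```python
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian RBF vector: phi_j(x) = exp(-||x - c_j||^2 / width^2)."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / width ** 2)

def disturbance_estimate(x, weights, centers, width):
    """Linear-in-weights approximation d_hat(x) = W phi(x), the standard
    RBF NN form used for disturbance compensation (weights are assumed
    to be tuned online by an adaptive law not shown here)."""
    return weights @ rbf_features(x, centers, width)

def event_triggered(x, x_last, sigma):
    """Fire an event (broadcast state, update controller) only when the
    gap between the current state x and the last broadcast state x_last
    exceeds a state-dependent threshold sigma * ||x||."""
    return np.linalg.norm(x - x_last) > sigma * np.linalg.norm(x)
```

Between events each agent holds its last broadcast state, so communication and actor-critic updates occur only when the triggering error grows large, which is what reduces the update frequency relative to time-triggered schemes.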