Mixed zero-sum games (MZSGs) consider zero-sum and non-zero-sum differential game problems simultaneously. In this paper, multiplayer MZSGs are studied by means of an integral reinforcement learning (IRL) algorithm under a dynamic event-triggered control (DETC) mechanism for completely unknown nonlinear systems. Firstly, an adaptive dynamic programming (ADP)-based on-policy approach is proposed to solve the MZSG problem for nonlinear systems with multiple players. Secondly, to avoid relying on knowledge of the system dynamics, a model-free control strategy is developed using actor–critic neural networks (NNs) to address the MZSG problem for unknown systems. On this basis, to avoid wasting communication and computational resources, the dynamic event-triggered mechanism is integrated into the IRL algorithm, and a dynamic triggering condition is designed to further reduce the number of triggering instants. With the help of the Lyapunov stability theorem, the system states and NN weights are proven to be uniformly ultimately bounded (UUB). Finally, two examples are presented to demonstrate the effectiveness and feasibility of the developed control method. The simulation results show that, compared with the static event-triggered mechanism, the DETC mechanism reduces the number of actuator updates by 55% and 69% in the two examples, respectively.
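
As a point of reference (not the specific condition constructed in this paper), a dynamic event-triggering rule of the kind referred to above typically augments a static threshold with an internal dynamic variable; the symbols below ($e_j$, $\eta$, $\sigma$, $\lambda$, $\theta$) are illustrative assumptions for a minimal sketch:
\[
\dot{\eta}(t) = -\lambda\,\eta(t) + \sigma\|x(t)\|^{2} - \|e_j(t)\|^{2}, \qquad \eta(0) > 0,
\]
\[
t_{j+1} = \inf\bigl\{\, t > t_j \;:\; \eta(t) + \theta\bigl(\sigma\|x(t)\|^{2} - \|e_j(t)\|^{2}\bigr) \le 0 \,\bigr\},
\]
where $e_j(t) = x(t_j) - x(t)$ is the gap between the last transmitted state and the current state. Letting $\theta \to \infty$ recovers the static condition $\|e_j(t)\|^{2} \le \sigma\|x(t)\|^{2}$, so the dynamic variable $\eta(t)$ can only lengthen the inter-event intervals, which is consistent with the reported reduction in actuator updates relative to the static event-triggered mode.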