This article addresses adaptive event‐triggered control of discrete‐time nonlinear Markov jump systems (MJSs) subject to DoS attacks, where the DoS attacks follow a more general stochastic model with fixed attack frequency and duration. To solve the optimal control problem, we use an adaptive dynamic programming (ADP) algorithm based on policy iteration (PI). The iterative process is as follows: the performance index function (PIF) is first updated using the current iteration policy, and the control policy is then derived from the updated PIF. Subsequently, the optimal PIF and the optimal control policy are approximated by an actor‐critic structure built with neural network techniques. To reduce the communication resources consumed by the control policy iteration, we introduce an event‐triggering mechanism (ETM) with an adaptive triggering threshold, which is less conservative in its resource usage for the PIF than a fixed‐threshold ETM. In addition, an observer is designed to identify the unknown part of the system dynamics. Finally, using a Lyapunov function, it is shown that the designed control policy ensures the stability and convergence of the MJS and that the designed observer is effective. Simulation examples are given to verify the feasibility of the controller and the observer.
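The PI loop described above (PIF update under the current policy, followed by policy improvement from the updated PIF) can be sketched for a hypothetical single‐mode linear system; the Markov jumps, DoS attacks, event triggering, and neural‐network actor‐critic are omitted, and the system matrices, cost weights, and initial stabilizing gain below are illustrative assumptions, not from the article:

```python
import numpy as np

# Illustrative single-mode linear system x_{k+1} = A x_k + B u_k (assumption;
# the article treats nonlinear Markov jump systems).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)          # state weight in the performance index
R = np.array([[1.0]])  # control weight

def policy_evaluation(K, iters=500):
    """PIF update: for the fixed policy u = -K x, iterate the Lyapunov
    recursion P <- Q + K'RK + (A - BK)' P (A - BK) to evaluate the policy."""
    Ac = A - B @ K
    P = np.zeros((2, 2))
    for _ in range(iters):
        P = Q + K.T @ R @ K + Ac.T @ P @ Ac
    return P

def policy_improvement(P):
    """Policy update: derive the improved gain from the evaluated PIF."""
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

K = np.array([[1.0, 1.0]])  # initial stabilizing gain (assumption)
for _ in range(20):         # alternate PIF update and policy improvement
    P = policy_evaluation(K)
    K = policy_improvement(P)
```

In the linear case this PI loop converges to the solution of the discrete‐time algebraic Riccati equation; in the article's nonlinear setting, the two steps are instead approximated by the critic and actor networks.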