This article presents a general six-step discrete-time Zhang neural network (ZNN) for time-varying tensor absolute value equations. Firstly, based on Taylor expansion theory, we derive a general Zhang et al. discretization (ZeaD) formula, i.e., a general Taylor-type 1-step-ahead numerical differentiation rule for first-order derivative approximation, which contains two free parameters. Based on the bilinear transform and the Routh-Hurwitz stability criterion, the effective domain of the two free parameters that ensures the convergence of the general ZeaD formula is analyzed. Secondly, based on the general ZeaD formula, we design a general six-step discrete-time ZNN (DTZNN) model for time-varying tensor absolute value equations (TVTAVEs), whose steady-state residual error changes in a higher-order manner than those of the models presented in the literature. Meanwhile, the feasible region of its step size, which determines its convergence, is also studied. Finally, experimental results corroborate that the general six-step DTZNN model is quite efficient for TVTAVE solving.
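For concreteness, the following is a minimal LaTeX sketch of the general shape of a six-step, 1-step-ahead ZeaD-type differentiation rule of the kind described above, together with the two lowest-order consistency conditions that Taylor expansion imposes on its coefficients. The notation ($x_k = x(t_k)$, sampling gap $\tau$, coefficients $a_j$) and the conditions shown are illustrative assumptions; they do not reproduce the paper's specific parameterization of the coefficients in terms of its two free parameters.

% Illustrative general form (assumed notation): x_k = x(t_k), sampling gap \tau,
% coefficients a_{-1}, a_0, ..., a_5 to be fixed up to two free parameters.
\begin{equation}
  \dot{x}_k \;\approx\; \frac{1}{\tau}\Bigl( a_{-1}\,x_{k+1} \;+\; \sum_{i=0}^{5} a_i\,x_{k-i} \Bigr).
\end{equation}
% Lowest-order consistency conditions, obtained by Taylor-expanding each sample about t_k
% and matching the x_k and \dot{x}_k terms; higher-order accuracy adds further conditions,
% which is what leaves exactly two coefficients free in the general formula.
\begin{equation}
  \sum_{j=-1}^{5} a_j = 0,
  \qquad
  a_{-1} \;-\; \sum_{i=1}^{5} i\,a_i = 1 .
\end{equation}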