In this paper, a novel distributed gradient neural network (DGNN) with predefined-time convergence is proposed to solve consensus problems that arise widely in multi-agent systems. Compared with previous gradient neural networks (GNNs) for optimization and computation, the proposed DGNN model operates in a non-fully-connected manner, in which each neuron needs only the information of its neighboring neurons to converge to the equilibrium point. The convergence and asymptotic stability of the DGNN model are proved via Lyapunov theory. In addition, under a relatively loose condition, three novel nonlinear activation functions are designed to speed up the DGNN model to predefined-time convergence, which is proved rigorously. Numerical results further verify the effectiveness, especially the predefined-time convergence, of the proposed nonlinearly activated DGNN model in solving various consensus problems of multi-agent systems. Finally, a practical case of directional consensus is presented to show the feasibility of the DGNN model, and a corresponding connectivity-testing example is given to verify the influence of network connectivity on the convergence speed.
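To illustrate the neighbor-only communication pattern described above, the following is a minimal sketch (not the paper's DGNN model) of a discretized distributed gradient flow for average consensus, where each agent updates its state using only the states of its neighbors. The graph topology, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

# Undirected ring of 5 agents, given as an adjacency list (assumed topology).
neighbors = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}

x = np.array([3.0, -1.0, 4.0, 0.5, 2.5])  # initial agent states
step = 0.1                                 # Euler discretization step size

for _ in range(500):
    # Each agent i descends the gradient of its local disagreement term
    # sum_{j in N(i)} (x_i - x_j)^2 / 2, which uses only neighbor states.
    grad = np.array([sum(x[i] - x[j] for j in neighbors[i])
                     for i in range(len(x))])
    x = x - step * grad

print(x)  # all states approach the average of the initial values (1.8)
```

With a linear update such as this, convergence is only asymptotic; the nonlinear activation functions studied in the paper are what accelerate the dynamics to predefined-time convergence.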