Neural networks are useful tools for solving mathematical and engineering problems. By using the implicit-explicit-θ method and the method recently proposed by Mohamad to discretize continuous-time neural networks, we formulate two classes of discrete-time analogues for solving a system of variational inequalities. By adopting suitable Lyapunov functions and Razumikhin-type techniques, exponential stability of the discrete neural networks is established in terms of linear matrix inequalities (LMIs). Several numerical experiments are performed to compare the convergence rates of the proposed discrete neural networks, and they show that (a) all of the discrete neural networks converge faster as the step size becomes larger, and (b) the discrete neural network derived by the semi-implicit Euler method performs best.
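To make the discretization step concrete, the following is a minimal sketch, assuming the continuous-time network takes the common projection form for a variational inequality VI(F, Ω); the specific model, the projection operator \(P_\Omega\), and the parameters \(\lambda\), \(\alpha\), \(\theta\), and step size \(h\) below are illustrative assumptions, not the paper's stated equations. Starting from
\[
\dot{x}(t) = \lambda\bigl\{P_\Omega\bigl(x(t)-\alpha F(x(t))\bigr) - x(t)\bigr\},
\]
an implicit-explicit-\(\theta\) analogue treats the projection term explicitly and the linear term by a \(\theta\)-weighted average,
\[
x^{k+1} = x^{k} + h\lambda\Bigl\{P_\Omega\bigl(x^{k}-\alpha F(x^{k})\bigr) - \bigl[(1-\theta)x^{k} + \theta x^{k+1}\bigr]\Bigr\},
\]
which in the semi-implicit Euler case \(\theta = 1\) can be solved for \(x^{k+1}\) in closed form:
\[
x^{k+1} = \frac{x^{k} + h\lambda\,P_\Omega\bigl(x^{k}-\alpha F(x^{k})\bigr)}{1 + h\lambda}.
\]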