This paper presents a new approach for compensating the nonlinear errors of a load cell, based on an optimal neural network with an augmented Lagrange multiplier (ALMNN). A load cell suffers from serious nonlinear errors, which lower the accuracy of the weighing results. In the proposed approach, we first construct constraints for training the neural network (NN) from the load cell's prior knowledge, i.e., the monotonically increasing property of its input-output function. Second, we build the augmented Lagrange function, with a multiplier and a penalty factor, to serve as the NN's performance index, and then derive the detailed training algorithm of ALMNN. Because the penalty factor plays an important role in ALMNN, its reasonable numerical range is discussed in this paper. Meanwhile, the weighing principle of a strain gauge load cell is introduced and its nonlinear error model is established. Finally, experimental results demonstrate the effectiveness of ALMNN. Compared with the conventional data-induction method (DINN, i.e., an NN trained only on data samples, without prior knowledge), ALMNN significantly improves the NN's generalization ability, especially when training samples are scarce. In addition, the applicability of ALMNN is verified by three additional experiments.

Index Terms: First-order derivative constraint, Lagrange multiplier, load cell, monotonicity, neural networks (NNs), nonlinear error compensation.
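
For orientation, the performance index described above can be read as a standard augmented Lagrangian; the following is only a generic sketch under that assumption, with illustrative symbols \(E\), \(g_i\), \(\lambda_i\), and \(c\) that may differ from the notation used in the body of the paper:

\[
L_A(\mathbf{w}, \boldsymbol{\lambda}, c) = E(\mathbf{w}) + \sum_{i} \lambda_i\, g_i(\mathbf{w}) + \frac{c}{2} \sum_{i} g_i^2(\mathbf{w}),
\]

where \(E(\mathbf{w})\) denotes the usual data-fit error of the NN with weights \(\mathbf{w}\), \(g_i(\mathbf{w})\) measures the violation of the monotonic-increasing (first-order derivative) constraint at the \(i\)-th check point, \(\lambda_i\) are the Lagrange multipliers, and \(c > 0\) is the penalty factor whose admissible range is discussed in the paper.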