Synthetic aperture radar automatic target recognition (SAR ATR) exploits computer processing to infer target classes without human intervention. For SAR ATR, deep learning has gradually emerged as a powerful tool and achieved promising performance. However, it still faces serious challenges in incremental recognition scenarios. Existing deep-learning-based SAR ATR methods usually predefine the total number of recognition classes, whereas in realistic applications new tasks/classes are added continuously. If all old data are stored and mixed with the newly added data to update the model, the storage pressure and time consumption make the application infeasible. In this article, high-plasticity error-correction incremental learning (HPecIL) is proposed to address model degradation and plasticity decline in the incremental scenario. Multiple optimal models trained on old tasks are used to correct accumulative errors and alleviate model degradation. Moreover, the sharp shift in data distribution caused by newly added data can also make the model underperform; a class-balanced training batch is therefore constructed to deal with the unbalanced data distribution. To strike a trade-off between model stability and model plasticity, low-effect nodes in the model are removed to improve the efficiency of model updates. Experimental results demonstrate that the proposed HPecIL outperforms other state-of-the-art methods in incremental recognition scenarios.
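To make the class-balanced training batch concrete, the sketch below shows one way such a batch could be drawn from a small exemplar memory of old classes together with the newly added data. It is only an illustrative assumption of the idea: the function name, batch size, and sampling-with-replacement rule are not taken from the paper.

```python
# Illustrative sketch only: HPecIL's exact sampling rule is not specified here,
# so the memory layout, batch size, and helpers below are assumptions.
import random
from collections import defaultdict

def build_class_balanced_batch(old_exemplars, new_samples, batch_size=64):
    """Draw roughly the same number of samples per class from the union of
    stored old-class exemplars and newly added class data."""
    by_class = defaultdict(list)
    for x, y in old_exemplars + new_samples:   # each item is a (sample, label) pair
        by_class[y].append(x)

    classes = list(by_class)
    per_class = max(1, batch_size // len(classes))

    batch = []
    for y in classes:
        pool = by_class[y]
        # Sample with replacement when a class has too few samples, so the
        # minority (old) classes are not drowned out by the new data.
        if len(pool) < per_class:
            picks = random.choices(pool, k=per_class)
        else:
            picks = random.sample(pool, per_class)
        batch.extend((x, y) for x in picks)

    random.shuffle(batch)
    return batch
```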