Training accurate and explainable models within limited time has been a longstanding challenge in classification. One of the most popular techniques for this task is the support vector machine (SVM). Compared with the SVM, the least squares support vector machine (LSSVM) is smooth and more robust; however, it suffers from poor predictive performance and high time complexity on large-scale, high-dimensional datasets. In this paper, we introduce a sparse accelerated limited-memory Broyden–Fletcher–Goldfarb–Shanno (SAL-BFGS) algorithm that tackles this problem in two respects: (1) we reformulate the LSSVM as a new classifier whose objective function involves the $\ell_{0}$-norm; (2) we propose a Nesterov-accelerated limited-memory BFGS method to train the classifier. Both improvements markedly enhance the predictive accuracy and reduce the computational cost of the classifier. Compared with L2SAAG, L1VRSGD, and SL-BFGS, SAL-BFGS improves the median TCA values by averages of 6.32\%, 5.63\%, and 5.44\%, respectively, on twelve real-world datasets. Experimental results demonstrate the superiority of the proposed algorithm.
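The Nesterov-accelerated limited-memory BFGS idea in point (2) can be illustrated with a generic sketch: an L-BFGS two-loop recursion supplies the search direction, and the gradient is evaluated at a Nesterov look-ahead point. This is only a minimal illustration on a least-squares objective, not the paper's exact SAL-BFGS; the step size, memory size, and momentum schedule $\beta_k = k/(k+3)$ are illustrative assumptions.

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Two-loop recursion: approximate H^{-1} grad from stored (s, y) pairs."""
    q = grad.copy()
    alphas = []
    # Backward pass over pairs, newest to oldest.
    for s, y in zip(reversed(s_list), reversed(y_list)):
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if s_list:  # scale by the standard initial-Hessian heuristic
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    # Forward pass over pairs, oldest to newest.
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        b = (y @ q) / (y @ s)
        q += (a - b) * s
    return q

def nesterov_lbfgs(grad_fn, w0, lr=0.5, m=5, iters=50):
    """L-BFGS steps taken from a Nesterov extrapolation (look-ahead) point."""
    w, w_prev = w0.copy(), w0.copy()
    s_list, y_list = [], []
    g_prev, v_prev = None, None
    for k in range(iters):
        beta = k / (k + 3.0)            # illustrative momentum schedule
        v = w + beta * (w - w_prev)     # Nesterov look-ahead point
        g = grad_fn(v)
        d = lbfgs_direction(g, s_list, y_list)
        w_prev, w = w, v - lr * d
        if g_prev is not None:
            s, y = v - v_prev, g - g_prev
            if s @ y > 1e-10:           # keep pair only if curvature is positive
                s_list.append(s); y_list.append(y)
                if len(s_list) > m:     # enforce limited memory
                    s_list.pop(0); y_list.pop(0)
        g_prev, v_prev = g, v
    return w

# Tiny least-squares demo: minimize 0.5 * ||X w - y||^2.
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 5)) / np.sqrt(40)  # scaled for a safe step size
y = rng.standard_normal(40)
grad_fn = lambda w: X.T @ (X @ w - y)
w_hat = nesterov_lbfgs(grad_fn, np.zeros(5))
w_star = np.linalg.lstsq(X, y, rcond=None)[0]
```

In this sketch the curvature pairs are formed from consecutive look-ahead points rather than the iterates themselves, so the quasi-Newton model stays consistent with where the gradients were actually evaluated.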