Support vector machines (SVMs) are among the most widely used models for classification problems in machine learning. An important scenario nowadays is that many different parties jointly perform SVM training by integrating their individual data, while the privacy of that data must be preserved. At present there are three main routes to achieving privacy-preserving SVM. First, all parties jointly generate kernel matrices privately and then use them for the remaining training (e.g. Yu et al. 2006). Second, building on the first route, an additional randomization step is applied to the kernel matrices in order to (heuristically) hide the information they expose (e.g. Mangasarian et al. 2008). Third, and the most secure, all parties run MPC protocols that compute the whole optimization algorithm privately, not merely the generation of kernel matrices as the first two routes do (e.g. Liu et al. 2018 and Wang et al. 2020). In this paper we propose a new efficient privacy-preserving SVM protocol in the third route. It privately realizes the gradient descent method to optimize the SVM, and its security is proven in the semi-honest model. Our protocol admits the following advantages.
- The protocol allows flexible deployment. It supports arbitrarily many servers and arbitrarily many clients.
- The protocol can tolerate the dropping-out of some servers.
- The protocol admits malicious-error-message correction (which actually goes beyond semi-honest security). Even if a small number of messages are corrupted, it can still recover the correct messages.
We remark that no single known work achieves all of the above advantages. Moreover, compared with the privacy-preserving SVM protocols of Liu et al. 2018 and Wang et al. 2020, our protocol achieves higher efficiency. We implement our protocol in Python, and the experiments verify its efficiency.
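To make the optimization step concrete, the following is a minimal plaintext sketch of the gradient descent method for a linear soft-margin SVM, i.e. subgradient descent on the regularized hinge loss. This is only a reference for what the protocol computes; it is not the privacy-preserving protocol itself, and the function name and hyperparameters are illustrative choices, not taken from the paper.

```python
import numpy as np

def svm_subgradient_descent(X, y, lam=0.01, lr=0.1, epochs=200):
    """Train a linear soft-margin SVM by subgradient descent on
    (lam/2)*||w||^2 + mean(max(0, 1 - y*(Xw + b))), with labels y in {-1, +1}."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1                     # points violating the margin
        grad_w = lam * w - (y[active, None] * X[active]).sum(axis=0) / n
        grad_b = -y[active].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy linearly separable data
X = np.array([[2.0, 2.0], [1.5, 2.5], [-2.0, -1.0], [-1.0, -2.0]])
y = np.array([1, 1, -1, -1])
w, b = svm_subgradient_descent(X, y)
preds = np.sign(X @ w + b)
```

In the third route above, an MPC protocol would carry out these same iterations over secret-shared data, so that no party learns the training examples of the others.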