Abstract—In our previous work, we developed an active set training method for L2 support vector machines (SVMs) based on Newton's method. Because the method allows a solution to be infeasible during training, its convergence is not guaranteed. In this paper, we guarantee convergence of active set training by limiting the corrections so that the constraints are satisfied when slow convergence is detected. Specifically, we start training the L2 SVM with a subset of the training data, delete from the working set the variables with non-positive dual values as well as those with margins larger than or equal to 1, add violating variables to the working set, and repeat training. We monitor the number of violation fluctuations, and if it exceeds a specified value, we obtain a feasible solution by prohibiting the addition of violating variables. Then, starting from this feasible solution, we resume active set training, limiting the corrections so that the solution remains feasible. Computer experiments show that the proposed training method is faster and more stable than the previous method.
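To make the loop concrete, the following is a minimal sketch of an active-set training procedure of the kind the abstract describes, for a linear-kernel L2 SVM. It is not the authors' implementation: the names `subset_size` and `max_fluctuations`, the crude oscillation counter, and the final clipping step that restores feasibility are illustrative assumptions, and the paper's feasible-region correction phase is only summarized by an early exit.

```python
# Sketch only: illustrates the working-set loop from the abstract under
# simplifying assumptions (linear kernel, toy fluctuation heuristic).
import numpy as np

def train_l2_svm(X, y, C=1.0, subset_size=10, max_fluctuations=5):
    n = len(y)
    # Dual Hessian of the L2 SVM: y_i y_j x_i . x_j + delta_ij / C
    Q = (y[:, None] * y[None, :]) * (X @ X.T) + np.eye(n) / C
    alpha, b = np.zeros(n), 0.0
    work = list(range(min(subset_size, n)))      # start with a data subset
    allow_add, fluctuations, prev_nv = True, 0, 0
    for _ in range(100):                         # safety cap for the sketch
        W = np.array(work)
        # The dual restricted to the working set is quadratic, so one
        # Newton step is a single KKT solve; the result may be
        # infeasible (some alpha_i < 0), as the abstract notes.
        A = np.block([[Q[np.ix_(W, W)], y[W, None]],
                      [y[None, W], np.zeros((1, 1))]])
        rhs = np.append(np.ones(len(W)), 0.0)
        sol = np.linalg.lstsq(A, rhs, rcond=None)[0]  # robust to singular A
        alpha[:] = 0.0
        alpha[W], b = sol[:-1], sol[-1]
        m = y * (X @ (X.T @ (alpha * y)) + b)    # margins y_i f(x_i)
        # Delete non-positive dual variables and those with margin >= 1.
        work = [i for i in work if alpha[i] > 0 and m[i] < 1]
        viol = [i for i in range(n) if i not in work and m[i] < 1 - 1e-8]
        if not viol:
            break                                # optimality reached
        if len(viol) > prev_nv:                  # crude fluctuation count
            fluctuations += 1
        prev_nv = len(viol)
        if fluctuations > max_fluctuations:
            allow_add = False                    # stop adding violators
        if allow_add:
            work += viol                         # grow the working set
        else:
            np.clip(alpha, 0.0, None, out=alpha)  # force a feasible point
            break  # feasible-region correction phase omitted in this sketch
    return alpha, b
```

The single linear solve stands in for Newton's method only because the restricted dual is quadratic; with a nonlinear kernel the same loop applies, with Q built from the kernel matrix instead of X @ X.T.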