Abstract—In our previous work, we developed an active set training method for L2 support vector machines (SVMs) based on Newton's method. Because the method allows the solution to be infeasible during training, its convergence is not guaranteed. In this paper, we guarantee convergence of active set training by limiting the corrections under the constraints when slow convergence is detected. Namely, we start training the L2 SVM with a subset of the training data, delete from the working set the non-positive dual variables as well as the variables whose margins are larger than or equal to 1, add violating variables to the working set, and repeat training. We monitor the number of violation fluctuations, and if it exceeds a specified value, we obtain a feasible solution by prohibiting the addition of violating variables. Then, starting from this feasible solution, we resume active set training while limiting the corrections so that the solution remains feasible. Computer experiments show that the proposed training method is faster and more stable than the previous method.
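The outer loop described in the abstract can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the authors' implementation: it uses a linear kernel without a bias term, the Newton step reduces to one exact linear solve per working set (the dual objective restricted to the working set is quadratic), and the fluctuation check simply stops adding violators rather than performing the paper's feasibility-restricted correction phase. All function and variable names are illustrative.

```python
import numpy as np

def train_l2_svm_active_set(X, y, C=10.0, max_outer=100, max_fluct=5):
    """Sketch of active-set L2-SVM training (linear kernel, no bias term).

    Loop: solve for the duals on the working set, delete non-positive
    dual variables, then either stop (no violators), add violators, or
    quit adding once the violation count has fluctuated too often.
    """
    n = len(y)
    Z = X * y[:, None]
    G = Z @ Z.T                        # Q_ij = y_i y_j x_i . x_j
    Qh = G + np.eye(n) / C             # L2 soft margin adds I/C to the Hessian
    alpha = np.zeros(n)
    working = list(range(min(n, 2)))   # start with a subset of the data
    removed = set()                    # variables deleted at some point
    fluct = 0
    for _ in range(max_outer):
        alpha = np.zeros(n)
        W = list(working)
        # exact Newton step on the working set: solve (Q + I/C)_WW a_W = 1
        alpha[W] = np.linalg.solve(Qh[np.ix_(W, W)], np.ones(len(W)))
        # delete non-positive dual variables from the working set
        pos = [i for i in W if alpha[i] > 1e-12]
        if len(pos) < len(W):
            removed |= set(W) - set(pos)
            working = pos
            continue                   # re-solve on the pruned working set
        margins = G @ alpha            # y_i f(x_i) for every sample
        violators = [i for i in range(n)
                     if i not in set(W) and margins[i] < 1 - 1e-9]
        if not violators:
            break                      # KKT conditions hold: done
        # a previously deleted variable violating again counts as a fluctuation
        fluct += sum(i in removed for i in violators)
        if fluct > max_fluct:
            # slow convergence detected; the paper switches to training
            # from a feasible solution here -- this sketch just stops adding
            break
        working = W + violators
    w = Z.T @ alpha                    # primal weight vector (linear kernel)
    return w, alpha
```

On a small linearly separable set such as `X = [[2,2],[3,3],[-2,-2],[-3,-3]]`, `y = [1,1,-1,-1]`, the loop starts from two samples, prunes a negative dual, pulls in one violator, and converges with all remaining duals positive.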