In this paper, we propose automatic learning algorithms (called autokSVM-1p and autokSVM-dp) that automatically tune the hyper-parameters of k local support vector machines (SVMs) for classifying large data sets. The autokSVM algorithms determine the number of clusters k used to partition the large training data and then learn a non-linear SVM model in each cluster to classify the data locally, in parallel, on multi-core computers. The autokSVM algorithms combine grid search, the .632 bootstrap estimator, and a hill-climbing heuristic to optimize the hyper-parameters of the local non-linear SVMs. Numerical test results on four data sets from the UCI repository and three handwritten-letter recognition benchmarks show that our autokSVM algorithms achieve classification accuracy competitive with the standard LibSVM and the original kSVM. As an example of autokSVM-1p's effectiveness, it reaches 96.74% accuracy on the Forest cover-type data set (581,012 data points in a 54-dimensional input space, 7 classes) in 334.45 seconds on a PC with an Intel(R) Core i7-4790 CPU (3.6 GHz, 4 cores).

Keywords Support vector machines • Large data sets • Local support vector machines • Automatic hyper-parameter tuning

This article is part of the topical collection "Future Data and Security Engineering" guest edited by Tran Khanh Dang.
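To make the local-SVM idea concrete, the following is a minimal sketch, assuming scikit-learn's KMeans and SVC and joblib for parallelism: the training data are partitioned into k clusters and one RBF-kernel SVM is fitted per cluster in parallel; at prediction time each point is routed to the SVM of its nearest cluster. The function names and the fixed C and gamma values are illustrative placeholders, not the authors' implementation; in autokSVM these hyper-parameters are tuned automatically per cluster.

```python
# Illustrative sketch of k local SVMs (not the authors' code): partition the
# training data with k-means, fit one non-linear SVM per cluster in parallel,
# and route each test point to the SVM of its nearest cluster.
import numpy as np
from joblib import Parallel, delayed
from sklearn.cluster import KMeans
from sklearn.svm import SVC


def fit_local_svms(X, y, k, C=100.0, gamma=0.01, n_jobs=-1):
    """Cluster X into k parts and train one RBF SVM per part.
    C and gamma are placeholders; the paper tunes them automatically."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    labels = km.labels_

    def fit_one(c):
        # Note: a cluster containing a single class would need special
        # handling (omitted in this sketch).
        idx = np.where(labels == c)[0]
        return SVC(kernel="rbf", C=C, gamma=gamma).fit(X[idx], y[idx])

    models = Parallel(n_jobs=n_jobs)(delayed(fit_one)(c) for c in range(k))
    return km, models


def predict_local_svms(km, models, X):
    """Assign each test point to its nearest centroid, then predict with
    that cluster's local SVM."""
    labels = km.predict(X)
    y_pred = np.empty(len(X), dtype=models[0].classes_.dtype)
    for c, model in enumerate(models):
        idx = np.where(labels == c)[0]
        if idx.size:
            y_pred[idx] = model.predict(X[idx])
    return y_pred
```

Because each local SVM is trained on only a fraction of the data, the k non-linear trainings are much cheaper than one global training and can run concurrently on the cores of a single machine, which is the source of the speed-up reported above.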