The support vector machine (SVM) is regarded as one of the most effective techniques for supervised learning, and the Gaussian kernel SVM is widely used owing to its excellent performance. To ensure a high-performing model, the hyperparameters, i.e., the kernel width and the penalty factor, must be chosen appropriately. This paper studies the influence of these hyperparameters on the Gaussian kernel SVM as they approach their extreme values (0 or ∞). To improve computational efficiency, a parameter optimization method based on local density and the accuracy of the Leave-One-Out (LOO) method is proposed. The kernel width of each sample is determined from its local density so as to ensure higher separability in the feature space, while the penalty parameter is determined by an improved grid search using the LOO accuracy. A comparison with the grid search method is conducted to verify the validity of the proposed method. The classification accuracies on five real-life datasets from the UCI database are 0.9733, 0.9933, 0.7270, 0.6101 and 0.8867, slightly higher than those obtained with the grid search method. The results also show that the proposed method is computationally cheaper by one order of magnitude than the grid search method.
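
The sketch below illustrates the general idea under stated assumptions; it is not the authors' exact formulation. It assigns each training sample a kernel width equal to the mean distance to its k nearest neighbours (a common density proxy; k = 7 is an arbitrary choice here), builds a local-scaling Gaussian kernel of the assumed form K(x_i, x_j) = exp(-||x_i - x_j||^2 / (s_i s_j)), and selects the penalty factor C by Leave-One-Out accuracy over a coarse grid rather than the paper's improved grid search.

```python
# Minimal sketch (not the paper's exact method): per-sample Gaussian kernel
# widths from a local-density proxy, and penalty-factor selection by
# Leave-One-Out accuracy over a coarse grid.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score


def local_widths(X, k=7):
    """Per-sample width s_i = mean distance to the k nearest neighbours (assumed proxy for local density)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dist, _ = nn.kneighbors(X)                 # first column is the point itself (distance 0)
    return dist[:, 1:].mean(axis=1)


def adaptive_gaussian_kernel(X, Y, sx, sy):
    """Gaussian kernel with sample-dependent widths: exp(-||x - y||^2 / (s_x * s_y))."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq / (sx[:, None] * sy[None, :]))


X, y = load_iris(return_X_y=True)              # stand-in dataset; the paper uses five UCI datasets
s = local_widths(X)
K = adaptive_gaussian_kernel(X, X, s, s)       # precomputed training kernel matrix

best_C, best_acc = None, -np.inf
for C in 2.0 ** np.arange(-5, 11, 2):          # coarse grid over the penalty factor
    acc = cross_val_score(SVC(C=C, kernel="precomputed"), K, y,
                          cv=LeaveOneOut()).mean()
    if acc > best_acc:
        best_C, best_acc = C, acc

print(f"best C = {best_C}, LOO accuracy = {best_acc:.4f}")
```

Because the kernel widths are fixed once from the data, only the one-dimensional grid over C needs Leave-One-Out evaluation, which is the source of the claimed saving relative to a two-dimensional grid search over both hyperparameters.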