2015 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2015.7280664

Effectiveness of Random Search in SVM hyper-parameter tuning

Abstract: Experimental results show that the predictive performance of models tuned with Random Search is equivalent to that obtained using meta-heuristics and Grid Search, but at a lower computational cost.

Cited by 104 publications (56 citation statements)
References 21 publications
“…The hyperparameters of the LSTM, CNN-LSTM and the proposed SWT-LSTM models are optimized and listed in Table 1. The C and γ values for the SVR model are automatically optimized using a grid search algorithm [27]. The forecasting results are compared using three well-known error metrics, root-mean-square error (RMSE), mean absolute percentage error (MAPE) [33] and mean bias error (MBE), with different step lengths.…”
Section: Table
Citation type: mentioning (confidence: 99%)
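The grid search setup described in that statement can be illustrated with a minimal scikit-learn sketch. The dataset, the candidate C and gamma ranges, and the metric computations below are illustrative assumptions, not the cited paper's configuration.

```python
# Minimal sketch (illustrative, not the cited paper's setup): tuning SVR's
# C and gamma by grid search, then reporting RMSE, MAPE and MBE on a test set.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=300, n_features=5, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Candidate C and gamma values (assumed ranges for illustration).
param_grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1]}
search = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=5,
                      scoring="neg_root_mean_squared_error")
search.fit(X_train, y_train)
y_pred = search.predict(X_test)

rmse = np.sqrt(np.mean((y_test - y_pred) ** 2))            # root-mean-square error
mape = np.mean(np.abs((y_test - y_pred) / y_test)) * 100   # mean absolute percentage error
mbe = np.mean(y_pred - y_test)                             # mean bias error
print(search.best_params_, rmse, mape, mbe)
```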
“…Rafael G. Mantovani et al. [9] investigated random search and grid search methods, aiming to tune the hyper-parameters of the Support Vector Machine (SVM) classifier.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
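A minimal sketch of the kind of random search studied in the cited work, assuming scikit-learn's RandomizedSearchCV; the dataset, sampling distributions, and evaluation budget are illustrative, not the authors' protocol.

```python
# Minimal sketch (illustrative): random search over SVM hyper-parameters
# C and gamma, sampling log-uniformly instead of enumerating a fixed grid.
from scipy.stats import loguniform
from sklearn.svm import SVC
from sklearn.model_selection import RandomizedSearchCV
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

# Assumed sampling ranges and budget for illustration only.
param_distributions = {"C": loguniform(1e-2, 1e3), "gamma": loguniform(1e-4, 1e1)}
search = RandomizedSearchCV(SVC(kernel="rbf"), param_distributions,
                            n_iter=30, cv=5, random_state=0)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```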
“…However, as this method searches all possible combinations of hyperparameters, it requires a large amount of computational time and memory [36]. RS finds the optimal combination of hyperparameters by randomly sampling possible values [37]. RS is more efficient than GS in seeking hyperparameters, as only part of the possible solutions is tried.…”
Section: Implementation of Optimization in SVR Models
Citation type: mentioning (confidence: 99%)
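The cost argument can be made concrete with a back-of-the-envelope count of model fits under cross-validation; the grid sizes and random-search budget below are assumed purely for illustration.

```python
# Assumed sizes (not from the paper): grid search fits every C/gamma
# combination per fold, random search fits only n_iter sampled settings.
n_C, n_gamma, cv_folds, n_iter = 20, 20, 5, 60

grid_fits = n_C * n_gamma * cv_folds    # 20 * 20 * 5 = 2000 model fits
random_fits = n_iter * cv_folds         # 60 * 5 = 300 model fits
print(grid_fits, random_fits)
```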