2020
DOI: 10.25126/jitecs.20205374

Comparison of Regression, Support Vector Regression (SVR), and SVR-Particle Swarm Optimization (PSO) for Rainfall Forecasting

Abstract: Rainfall is one of the factors that influence climate change in an area and is very difficult to predict, yet rainfall information is very important to the community. Forecasting can be done using existing historical data with the help of mathematical computing for modeling. The Support Vector Regression (SVR) method is one method that can be used to predict non-linear rainfall data using a regression function. In calculations using the regression function, choosing the right SVR parameters is needed to prod…
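Below is a minimal sketch of the SVR-plus-PSO idea the abstract describes: an RBF-kernel SVR whose C, epsilon, and gamma parameters are tuned by a small particle swarm, with validation RMSE as the fitness. The synthetic series, lag length, parameter bounds, and swarm settings are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal sketch of SVR parameter tuning with a simple particle swarm (PSO).
# Illustrative only: the synthetic data, search bounds, and swarm settings are
# assumptions, not the setup used in the paper.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Stand-in for a monthly rainfall series turned into a supervised problem
# (previous 12 values -> next value); real data would replace this.
series = rng.gamma(shape=2.0, scale=50.0, size=240)
lag = 12
X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
y = series[lag:]
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, shuffle=False)

def rmse_of(params):
    """Fitness: validation RMSE of an RBF-kernel SVR with the given (C, epsilon, gamma)."""
    C, eps, gamma = params
    model = SVR(kernel="rbf", C=C, epsilon=eps, gamma=gamma)
    model.fit(X_tr, y_tr)
    pred = model.predict(X_val)
    return np.sqrt(mean_squared_error(y_val, pred))

# Search space (assumed bounds) and basic PSO hyperparameters.
lo = np.array([1.0, 0.01, 1e-4])     # lower bounds for C, epsilon, gamma
hi = np.array([1000.0, 10.0, 1e-1])  # upper bounds for C, epsilon, gamma
n_particles, n_iter = 10, 20
w, c1, c2 = 0.7, 1.5, 1.5            # inertia, cognitive, and social weights

pos = rng.uniform(lo, hi, size=(n_particles, 3))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_fit = np.array([rmse_of(p) for p in pos])
gbest = pbest[pbest_fit.argmin()].copy()
gbest_fit = pbest_fit.min()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    fit = np.array([rmse_of(p) for p in pos])
    improved = fit < pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    if fit.min() < gbest_fit:
        gbest, gbest_fit = pos[fit.argmin()].copy(), fit.min()

print("best (C, epsilon, gamma):", gbest, "validation RMSE:", gbest_fit)
```

A grid search over the same bounds would serve the same role; PSO is shown here only because it is the tuning strategy named in the paper's title.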

Cited by 5 publications (2 citation statements)
References 11 publications
“…We use recent studies about water level classification, rainfall prediction, and flood prediction as references in this research. In the rainfall prediction using the SVM algorithm, the study shows that the best model produces a Root Mean Square Error (RMSE) value of 88.426 [6]. In flood prediction in Bangladesh using the K-NN algorithm, the study shows that the K-NN algorithm yields an average accuracy value of 94.91%, an average precision value of 92%, and an average recall value of 91% [7].…”
Section: I (mentioning)
confidence: 99%
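For reference, the RMSE quoted above is the usual root-mean-square error; its standard definition (not taken from the cited paper's notation), over $n$ test points with observed values $y_i$ and forecasts $\hat{y}_i$, is

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^{2}}$$

so the reported value of 88.426 is expressed in the same units as the rainfall series itself.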
“…In solving classification problems using non-linearly separable datasets, this algorithm uses a kernel function (kernel trick) for mapping the data into a high-dimensional feature space to obtain a hyperplane that separates the data into two classes [30]. Some of the more popular kernel functions often used in SVM are the polynomial, RBF, and sigmoid functions; these kernel functions use equations (4) to (6) to generate a hyperplane in the classification process [31].…”
Section: Support Vector Machine (mentioning)
confidence: 99%
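Equations (4) to (6) of the cited work are not reproduced here; for readers without access to it, the standard forms of these three kernels (in the common libsvm/scikit-learn parameterization, which may differ in notation from the paper) are

$$K_{\text{poly}}(x_i, x_j) = \left(\gamma\, x_i^{\top} x_j + r\right)^{d}, \qquad K_{\text{RBF}}(x_i, x_j) = \exp\!\left(-\gamma \lVert x_i - x_j \rVert^{2}\right), \qquad K_{\text{sigmoid}}(x_i, x_j) = \tanh\!\left(\gamma\, x_i^{\top} x_j + r\right)$$

where $\gamma > 0$, the offset $r$, and the degree $d$ are kernel hyperparameters.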