2022
DOI: 10.3390/w14233851

Water Quality Predictions Based on Grey Relation Analysis Enhanced LSTM Algorithms

Abstract: With the growth of industrialization in recent years, the quality of drinking water has been a great concern due to increasing water pollution from industries and industrial farming. Many monitoring stations are constructed near drinking water sources for the purpose of fast reactions to water pollution. Due to the relatively low sampling frequencies in practice, mathematical prediction models are clearly needed for such monitoring stations to reduce the delay between the time points of pollution occurrences and…

Cited by 7 publications (3 citation statements) · References 32 publications
“…XGBoost's optimization hyperparameters were learning rate (0.001, 0.01, 0.1), maximum tree depth (3, 5, 7, 9), and number of trees used by the model (100, 200, 300, 400, 500). The hyperparameters for KF-GRU and KF-LSTM models included sequence structure in the Kalman filter, target quantity in the model (1), number of feature columns (4), hidden layer size (1, 2, 3, 4), encoder-decoder layer numbers (1, 2, 3, 4), dropout rate regularization size (0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8), encoder and decoder size (1, 2, 3, 4), batch size (2, 4, 8, 16, 32, 64, 128), and learning rate (0.1, 0.01, 0.001). In the next section, we will discuss the performance of each model on the training and test sets.…”
Section: Model Evaluation Metrics (citation type: mentioning; confidence: 99%)
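The grid quoted above maps directly onto a standard exhaustive grid search. The cited paper publishes no code, so the following is a minimal sketch assuming XGBoost's scikit-learn wrapper and scikit-learn's GridSearchCV; the synthetic data (with four feature columns, matching the quote), the 3-fold cross-validation, and the MAE-based scoring are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from xgboost import XGBRegressor

# Placeholder data: in the cited work this would be the water-quality
# feature matrix and target series; random values are used here only
# so the sketch executes end to end.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 4)), rng.normal(size=200)

# The exact XGBoost grid quoted in the citation statement above.
param_grid = {
    "learning_rate": [0.001, 0.01, 0.1],
    "max_depth": [3, 5, 7, 9],
    "n_estimators": [100, 200, 300, 400, 500],
}

search = GridSearchCV(
    XGBRegressor(objective="reg:squarederror"),
    param_grid,
    scoring="neg_mean_absolute_error",  # MAE is the metric the text discusses
    cv=3,  # assumed fold count; not stated in the quote
)
search.fit(X, y)
print(search.best_params_)
```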
“…From the formula, it can be seen that the learning rate and batch size are closely related and mutually influence the final effect of the model. We continuously varied the model parameters batch size (2, 4, 8, 16, 32, 64, 128) and learning rate (0.1, 0.01, 0.001) over multiple experiments to obtain the results shown in Figure 12. We normalised the MAE results of each experiment to a range between 0 and 1 and mapped them to corresponding colours.…”
Section: Optimizer Selection and Parameter Optimization (citation type: mentioning; confidence: 99%)
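The sweep described in this statement (21 batch-size/learning-rate combinations, with min-max-normalised MAE mapped to colours) can be sketched as below. The paper's formula relating the two parameters and its training loop are not shown in the quote, so train_eval_mae is a hypothetical stub returning a dummy value; only the sweep, normalisation, and colour-map scaffolding follow the quoted text.

```python
import numpy as np
import matplotlib.pyplot as plt

batch_sizes = [2, 4, 8, 16, 32, 64, 128]
learning_rates = [0.1, 0.01, 0.001]

def train_eval_mae(lr: float, batch_size: int) -> float:
    """Hypothetical stand-in for one full KF-LSTM/KF-GRU training run.

    The cited work would train the model with this (lr, batch_size) pair
    and return the test-set MAE; a deterministic dummy value is returned
    here only so the sweep scaffolding runs.
    """
    return abs(np.log10(lr)) + np.log2(batch_size) / 10.0

# One MAE value per (learning rate, batch size) combination.
mae = np.array([[train_eval_mae(lr, bs) for bs in batch_sizes]
                for lr in learning_rates])

# Min-max normalise the MAE grid to [0, 1], as the quote describes,
# then render it as a colour map (the analogue of the paper's Figure 12).
mae_norm = (mae - mae.min()) / (mae.max() - mae.min())

plt.imshow(mae_norm, cmap="viridis", aspect="auto")
plt.xticks(range(len(batch_sizes)), batch_sizes)
plt.yticks(range(len(learning_rates)), learning_rates)
plt.xlabel("batch size")
plt.ylabel("learning rate")
plt.colorbar(label="normalised MAE (0-1)")
plt.title("Grid sweep over batch size and learning rate")
plt.show()
```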