2023
DOI: 10.3390/app13127237

Sequential Data Processing for IMERG Satellite Rainfall Comparison and Improvement Using LSTM and ADAM Optimizer

Abstract: This study introduces a systematic methodology whereby different technologies were utilized to download, pre-process, and interactively compare rainfall datasets from the Integrated Multi-satellitE Retrievals for the Global Precipitation Measurement (GPM) mission (IMERG) and rain gauges. To efficiently handle the large volume of data, we developed automated shell scripts for downloading IMERG data and storing it, along with rain gauge data, in a relational database system. Hypertext Preprocessor (PHP) programs were b…
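The abstract describes an automated pipeline that downloads IMERG granules and stores them, together with rain gauge records, in a relational database. The paper itself implements this with shell scripts and PHP; purely as a rough illustration of the same idea, the Python sketch below fetches one placeholder IMERG file URL and inserts rainfall rows into a SQLite table. The URL, missing Earthdata authentication, table schema, and example row are assumptions, not the authors' actual code.

```python
# Illustrative sketch only: the study used shell scripts + PHP + a relational DBMS.
# The URL, credentials, and table schema below are placeholders, not the paper's setup.
import sqlite3
import requests

IMERG_URL = "https://example.com/path/to/IMERG_granule.HDF5"  # placeholder endpoint
DB_PATH = "rainfall.db"

def download_imerg(url: str, dest: str) -> None:
    # Real IMERG downloads require an Earthdata login; authentication is omitted here.
    resp = requests.get(url, timeout=60)
    resp.raise_for_status()
    with open(dest, "wb") as f:
        f.write(resp.content)

def store_rainfall(db_path: str, rows) -> None:
    # rows: iterable of (timestamp, latitude, longitude, rainfall_mm, source)
    con = sqlite3.connect(db_path)
    con.execute(
        """CREATE TABLE IF NOT EXISTS rainfall (
               ts TEXT, lat REAL, lon REAL, value_mm REAL, source TEXT)"""
    )
    con.executemany("INSERT INTO rainfall VALUES (?, ?, ?, ?, ?)", rows)
    con.commit()
    con.close()

if __name__ == "__main__":
    download_imerg(IMERG_URL, "imerg_granule.h5")
    # Parsing the HDF5 granule into rows is omitted; gauge data would be loaded similarly.
    store_rainfall(DB_PATH, [("2023-01-01T00:00", 3.25, 101.5, 1.2, "IMERG")])
```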

Cited by 5 publications (2 citation statements) | References: 85 publications
“…The rationale behind its adoption is the optimizer's renowned capability for adaptive learning rates for each parameter. By leveraging moment estimates of the gradients, ADAM provides a more sophisticated and efficient trajectory in the parameter space, thereby potentially accelerating convergence [78,79]. Known for its effectiveness, this optimizer significantly enhances the model's precision.…”
Section: ANN Model with ReLU Activation and ADAM Optimizer (ReLAD-ANN) | Citation type: mentioning
confidence: 99%
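The statement refers to ADAM's per-parameter adaptive learning rates, built from first- and second-moment estimates of the gradient. A minimal NumPy sketch of a single Adam update step (the standard textbook form, not code from either paper) is:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; m, v are running moment estimates, t is the step count (t >= 1)."""
    m = beta1 * m + (1 - beta1) * grad          # first moment: running mean of gradients
    v = beta2 * v + (1 - beta2) * grad ** 2     # second moment: running mean of squared gradients
    m_hat = m / (1 - beta1 ** t)                # bias correction for the zero initialization
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)  # per-parameter adaptive step
    return theta, m, v
```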
“…After tuning the parameters through cross-validation, the batch size was set to 2048, the negative sampling rate to 300, the learning rate lr = 10⁻³, the regularization coefficient λ = 10⁻⁴, the weight coefficient w = 0.5, α = 1.1, β = 1, and the hyperparameters w₁ = 1, w₂ = 0.01, and w₃ = 0.01. The Adam [36] algorithm is used to optimize the model parameters. The weight coefficients η₁ and η₂ of the higher-order social friend influence and community neighbor influence are learned during model training.…”
Section: Parameter Settings | Citation type: mentioning
confidence: 99%
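As a rough illustration of how such settings might be wired into an Adam-based training loop, the PyTorch-style sketch below uses the quoted batch size, learning rate, and regularization coefficient. The model, data, loss, and the mapping of λ to weight_decay are assumptions for illustration only; the cited paper's remaining coefficients (w, α, β, w₁..w₃, η₁, η₂) are model-specific loss weights not reproduced here.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder model and data; the cited recommendation model is not reproduced here.
model = torch.nn.Linear(16, 1)
dataset = TensorDataset(torch.randn(4096, 16), torch.randn(4096, 1))

# Hyperparameters quoted in the citing paper (λ mapped to weight_decay is an assumption).
loader = DataLoader(dataset, batch_size=2048, shuffle=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

for x, y in loader:
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()
```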