2022
DOI: 10.3390/agronomy12030594

Daily Prediction and Multi-Step Forward Forecasting of Reference Evapotranspiration Using LSTM and Bi-LSTM Models

Abstract: Precise forecasting of reference evapotranspiration (ET0) is one of the critical initial steps in determining crop water requirements, which contributes to the reliable management and long-term planning of the world's scarce water sources. This study provides daily prediction and multi-step forward forecasting of ET0 utilizing a long short-term memory network (LSTM) and a bi-directional LSTM (Bi-LSTM) model. For daily predictions, the LSTM model's accuracy was compared to that of other artificial intelligence-…

Cited by 38 publications (13 citation statements)
References 121 publications
“…In other words, MLR aims to find the linear function that minimizes the sum of squared errors (SSE) between the observed and the predicted data. An advantage of this method is the easy interpretation of the coefficients, which are generated in the model with low computational effort compared to more complex techniques, such as energy balance methods and artificial intelligence algorithms [13–21,24–30,37–43,67–75]. For the MLR model, the response (dependent) variable y is assumed to be a function of k independent variables x_i.…”
Section: Methods
confidence: 99%
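The MLR fit described above can be sketched as an ordinary least-squares problem. This is a minimal illustration with synthetic placeholder data, not the cited authors' actual model or dataset; the variable names and true coefficients are assumptions for the demo.

```python
import numpy as np

# Synthetic data: k = 3 independent variables x_i and a known linear law.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
beta_true = np.array([1.5, -2.0, 0.5])
y = 4.0 + X @ beta_true + rng.normal(scale=0.1, size=100)

# Add an intercept column and solve the least-squares problem,
# i.e. find the coefficients that minimize the SSE.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

y_hat = A @ coef
sse = np.sum((y - y_hat) ** 2)   # the quantity OLS minimizes
print(coef.round(2))             # intercept followed by the 3 slopes
```

The recovered coefficients are directly interpretable as the marginal effect of each x_i on y, which is the low-effort interpretability advantage the quoted passage refers to.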
“…The direct measurement of ETo is demanding. Therefore, several methods for estimating ETo have been developed, ranging from simple empirical or physically based models [13,14] to complex algorithms and techniques, such as fuzzy logic and machine learning (ML) [15–21]. These methods employ data from meteorological stations or data retrieved via remote sensors [22–32].…”
Section: Introduction
confidence: 99%
“…A Long Short-Term Memory (LSTM) network, presented by Hochreiter & Schmidhuber (1997), is one of the recurrent neural network (RNN)-based algorithms. The LSTM model was introduced to mitigate the vanishing-gradient and optimization problems of the RNN model and to capture the long-term dependencies that exist across steps of sequential time-series data (Han & Morrison 2022; Roy et al 2022). The LSTM model is an extensible method for dealing with sequential time-series data such as speech recognition and language translation.…”
Section: LSTM Model
confidence: 99%
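The gate mechanism alluded to above can be sketched as a single LSTM cell step in NumPy. This is a hypothetical minimal implementation of the standard cell equations, not code from the cited paper; weight shapes and initialization are illustrative. The additive cell-state update `c = f*c_prev + i*g` is what provides the gradient path that mitigates the vanishing-gradient problem of plain RNNs.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    # W, U, b stack four gates of hidden size n:
    # input (i), forget (f), output (o), candidate (g).
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0*n:1*n])      # input gate
    f = sigmoid(z[1*n:2*n])      # forget gate
    o = sigmoid(z[2*n:3*n])      # output gate
    g = np.tanh(z[3*n:4*n])      # candidate cell update
    c = f * c_prev + i * g       # additive cell-state update (long-term memory)
    h = o * np.tanh(c)           # hidden state exposed at this time step
    return h, c

rng = np.random.default_rng(1)
d, n = 4, 8                      # input size, hidden size (arbitrary demo values)
W = rng.normal(scale=0.1, size=(4*n, d))
U = rng.normal(scale=0.1, size=(4*n, n))
b = np.zeros(4*n)
h, c = np.zeros(n), np.zeros(n)
for t in range(10):              # unroll over a short random input sequence
    h, c = lstm_step(rng.normal(size=d), h, c, W, U, b)
print(h.shape)                   # (8,)
```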
“…The LSTM model is an extensible method for dealing with sequential time-series data such as speech recognition and language translation. Specifically, in the hydrologic field, it has been widely used to predict various hydrological factors, such as precipitation and runoff (Kratzert et al 2018; Han & Morrison 2022; Roy et al 2022). A diagram of the structure of the LSTM cell is presented in Figure 3.…”
Section: LSTM Model
confidence: 99%
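Before an LSTM can predict a hydrological factor such as runoff, the raw series is typically windowed into supervised samples: each input holds the previous `lag` observations and the target is the next value. The sketch below uses a synthetic stand-in signal; the `lag` of 30 and the helper name are assumptions, not from the cited studies.

```python
import numpy as np

def make_windows(series, lag):
    # Each sample is lag consecutive values; the target is the next value.
    X = np.stack([series[t:t + lag] for t in range(len(series) - lag)])
    y = series[lag:]
    return X[..., None], y       # shape (samples, lag, 1) as an LSTM expects

runoff = np.sin(np.linspace(0, 8 * np.pi, 400))  # synthetic daily signal
X, y = make_windows(runoff, lag=30)
print(X.shape, y.shape)          # (370, 30, 1) (370,)
```

Multi-step forward forecasting, as in the article's title, then amounts to either predicting several targets per window or feeding each prediction back in as the newest input.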
“…In the input layer, the raw signal is processed in segments to construct input samples that satisfy the time-length requirement. The Bid-LSTM network layers extract characteristic information at different scales, and the number of memory units differs between LSTM layers [62]. The more memory units a layer has, the shorter the span of data each unit processes along the time dimension, and the stronger the layer's ability to extract high-frequency features from the signal.…”
Section: Deep Bid-LSTM with Attention Mechanism
confidence: 99%
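The bidirectional idea behind such a layer can be sketched as two passes over the sequence, one forward and one backward, whose hidden states are concatenated per time step. For brevity a plain tanh recurrence stands in for the full LSTM cell here; the sizes and weights are illustrative assumptions, and in a deep Bid-LSTM each stacked layer may use a different number of memory units, as the passage notes.

```python
import numpy as np

def run_rnn(xs, W, U):
    # Simple tanh recurrence standing in for an LSTM pass over xs.
    h = np.zeros(U.shape[0])
    out = []
    for x in xs:
        h = np.tanh(W @ x + U @ h)
        out.append(h)
    return np.stack(out)

rng = np.random.default_rng(2)
T, d, n = 12, 3, 5               # time steps, input size, memory units
xs = rng.normal(size=(T, d))
Wf, Uf = rng.normal(size=(n, d)), rng.normal(size=(n, n))
Wb, Ub = rng.normal(size=(n, d)), rng.normal(size=(n, n))

h_fwd = run_rnn(xs, Wf, Uf)                  # forward in time
h_bwd = run_rnn(xs[::-1], Wb, Ub)[::-1]      # backward pass, re-aligned
h_bi = np.concatenate([h_fwd, h_bwd], axis=1)
print(h_bi.shape)                # (12, 10): 2 * n features per time step
```

Each time step thus sees context from both the past and the future of the signal, which is what lets the layer extract features at scales a one-directional pass would miss.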