2014
DOI: 10.1134/s0097807814060153
The use of feed-forward back propagation and cascade correlation for the neural network prediction of surface water quality parameters

Cited by 11 publications (3 citation statements)
References 14 publications
“…A constructive neural network that aims to solve the problem of determining potential neurons that are not relevant to the output layer [49].
MNNs: a special feedforward network; the network whose cluster centroid is most similar to the inputs is chosen, addressing the problem of low prediction accuracy [30,50].
RNNs: developed with the rise of deep learning; solve the problem of long-term dependence that feedforward networks cannot capture [12,31,38,51,52].
LSTMs: structure similar to RNNs, with a memory cell state added to the hidden layer; address the well-known vanishing gradient problem of RNNs [15,26,45,53,54].
TLRN: structure similar to MLPs, with local recurrent connections in the hidden layer…”
Section: MLPs (mentioning, confidence: 99%)
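The contrast the quoted survey draws between RNNs and LSTMs (a memory cell state added to the hidden layer to ease vanishing gradients) is easy to see in code. The sketch below is illustrative only and is not taken from any of the cited papers; it assumes PyTorch is available, and the layer sizes and synthetic series are arbitrary.

```python
# Illustrative sketch only (assumes PyTorch; sizes and data are arbitrary).
import torch
import torch.nn as nn

class RecurrentForecaster(nn.Module):
    def __init__(self, cell="lstm", n_features=1, hidden=16):
        super().__init__()
        # The LSTM adds a memory cell state to the hidden layer, the mechanism
        # the quoted survey credits with easing vanishing gradients in RNNs.
        rnn_cls = nn.LSTM if cell == "lstm" else nn.RNN
        self.rnn = rnn_cls(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, time, features)
        out, _ = self.rnn(x)
        return self.head(out[:, -1, :])    # one-step-ahead prediction

# Toy usage: 32 sequences of 30 readings of a single water-quality parameter.
x = torch.randn(32, 30, 1)
y = torch.randn(32, 1)
model = RecurrentForecaster(cell="lstm")
loss = nn.MSELoss()(model(x), y)
loss.backward()                            # backpropagation through time
```

Swapping `cell="rnn"` for `cell="lstm"` keeps everything else identical, which is why the two families are often compared on the same series.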
“…Results demonstrated the effectiveness of the proposed methods. Researchers tended to allocate 70% to 90% of the total data to the training set [39,42,49,52,72,120-127]. Iglesias et al. [35] divided the data into training (90%) and testing (10%) sets.…”
Section: Artificial Neural Network Models for Water Quality Prediction (mentioning, confidence: 99%)
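The 70-90% training fractions noted in this statement correspond to an ordinary hold-out split. The following sketch is a hypothetical illustration rather than the cited studies' data or code; the column names and toy values are assumptions, and scikit-learn's train_test_split performs the partitioning.

```python
# Hypothetical hold-out split for tabular water-quality records.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "temperature":      [12.1, 14.3, 9.8, 11.0, 15.2, 13.4, 10.5, 12.9, 14.8, 11.7],
    "conductivity":     [310, 295, 330, 322, 288, 301, 318, 305, 290, 315],
    "dissolved_oxygen": [8.1, 7.6, 9.0, 8.4, 7.2, 7.9, 8.7, 8.0, 7.4, 8.5],
})
X = df[["temperature", "conductivity"]]
y = df["dissolved_oxygen"]

# 90/10 split as in Iglesias et al.; set train_size anywhere in 0.7-0.9.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.9, random_state=0
)
print(len(X_train), "training rows,", len(X_test), "test rows")
```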
“…ANNs have demonstrated the ability to derive highly nonlinear relationships, and they can be continually updated as more data is collected. Furthermore, this process modeling approach has been successfully implemented by a number of researchers to develop accurate predictions in application areas including fuel cells, batteries, heat exchangers, chemical reactions, and surface water quality parameters (Yu and Gomm, 2003; Singh et al., 2009; Vasickaninova et al., 2011; Shen et al., 2013; Chen et al., 2014; Elbisy et al., 2014).…”
Section: Artificial Neural Network (mentioning, confidence: 99%)
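As a concrete illustration of a feed-forward network trained by back-propagation, the kind of model named in the paper's title, the sketch below fits scikit-learn's MLPRegressor to synthetic data. The predictors, target, hidden-layer size, and 80/20 split are assumptions for illustration, not the authors' configuration.

```python
# Minimal sketch: feed-forward network trained by back-propagation (illustrative).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical predictors (e.g. temperature, pH, conductivity) and a target
# such as dissolved oxygen; replace with real monitoring records.
X = rng.normal(size=(200, 3))
y = 0.5 * X[:, 0] - 0.3 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=200)

# One hidden layer of 10 neurons; weights are learned by back-propagation.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
)
model.fit(X[:160], y[:160])                # first 80% of rows for training
print("test R^2:", model.score(X[160:], y[160:]))
```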