2018
DOI: 10.5194/gi-2018-13
Preprint

Backpropagation Neural Network as Earthquake Early Warning Tool using a new Elementary Modified Levenberg–Marquardt Algorithm to minimise Backpropagation Errors

Abstract: A new Elementary Modified Levenberg-Marquardt Algorithm (M-LMA) was used to minimise backpropagation errors in training a backpropagation neural network (BPNN) to predict the records related to the Chi-Chi earthquake from four seismic stations, Station-TAP003, Station-TAP005, Station-TCU084 and Station-TCU078, with learning rates of 0.3, 0.05, 0.2 and 0.28, respectively. For these four recording stations, the M-LMA has been shown to produce smaller predicted errors compared to the LMA. A sudden pre…
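For orientation, the sketch below applies the classical Levenberg-Marquardt update, dw = (J^T J + lambda*I)^(-1) J^T e, to a tiny feed-forward network. It is a minimal illustration only: the fixed damping factor, the 1-4-1 architecture, the finite-difference Jacobian and the toy data are assumptions, and it does not reproduce the paper's Elementary Modified LMA or its station-specific learning rates.

```python
# Minimal sketch of a standard Levenberg-Marquardt (LM) weight update for a tiny
# one-hidden-layer network, with a finite-difference Jacobian. In the full LM
# algorithm the damping factor lam is adapted between iterations; it is held
# fixed here for brevity. Not the paper's modified (M-LMA) variant.
import numpy as np

def forward(w, x, n_hidden=4):
    """Unpack the flat weight vector and evaluate the 1-4-1 network."""
    w1 = w[:n_hidden].reshape(n_hidden, 1)            # input -> hidden weights
    b1 = w[n_hidden:2 * n_hidden].reshape(n_hidden, 1)
    w2 = w[2 * n_hidden:3 * n_hidden].reshape(1, n_hidden)
    b2 = w[3 * n_hidden]
    h = np.tanh(w1 @ x[None, :] + b1)                 # hidden activations
    return (w2 @ h + b2).ravel()                      # network output

def lm_step(w, x, y, lam=1e-2, eps=1e-6):
    """One LM update: dw = (J^T J + lam*I)^-1 J^T e, Jacobian by finite differences."""
    e = y - forward(w, x)                             # residuals
    J = np.zeros((len(e), len(w)))
    for j in range(len(w)):
        dw = np.zeros_like(w)
        dw[j] = eps
        J[:, j] = (forward(w + dw, x) - forward(w, x)) / eps
    delta = np.linalg.solve(J.T @ J + lam * np.eye(len(w)), J.T @ e)
    return w + delta

# Toy usage: fit y = sin(x) on a coarse grid.
rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 40)
y = np.sin(x)
w = rng.normal(scale=0.5, size=3 * 4 + 1)
for _ in range(50):
    w = lm_step(w, x, y)
print("final SSE:", np.sum((y - forward(w, x)) ** 2))
```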

Cited by 3 publications (4 citation statements) · References 19 publications
“…The forward-propagation process and the back-propagation process of the BPNN algorithm together constitute the error-propagation process. In the forward propagation of the signal, input samples are supplied to each neuron of the input layer, the net input and output of each hidden layer and of the output layer are calculated, and the prediction result of the neural network is then obtained [14, 15]. If there is a large error between the calculated prediction result and the expected output, back-propagation of the error is started.…”
Section: Automatic TQA Model (mentioning)
confidence: 99%
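As a concrete illustration of the forward/back-propagation cycle described in that snippet, here is a minimal sketch of a one-hidden-layer BPNN with sigmoid units and a mean-squared-error criterion. The layer sizes, learning rate, stopping threshold and random data are assumptions for illustration, not values from the cited work.

```python
# Minimal forward/back-propagation cycle for a one-hidden-layer BPNN (sigmoid
# units, mean-squared error). All sizes, the learning rate and the data are
# illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 3))                      # 8 input samples, 3 features
T = rng.uniform(size=(8, 1))                     # expected outputs in (0, 1)
W1, b1 = rng.normal(size=(3, 5)), np.zeros(5)    # input  -> hidden weights
W2, b2 = rng.normal(size=(5, 1)), np.zeros(1)    # hidden -> output weights
eta = 0.3                                        # learning rate

for epoch in range(1000):
    # Forward propagation: net input and output of the hidden and output layers.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    E = Y - T                                    # prediction error
    if np.mean(E ** 2) < 1e-3:                   # error small enough: stop
        break
    # Back-propagation of the error: deltas use the sigmoid derivative y(1 - y).
    d2 = E * Y * (1.0 - Y)                       # output-layer delta
    d1 = (d2 @ W2.T) * H * (1.0 - H)             # hidden-layer delta
    W2 -= eta * H.T @ d2 / len(X); b2 -= eta * d2.mean(axis=0)
    W1 -= eta * X.T @ d1 / len(X); b1 -= eta * d1.mean(axis=0)
```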
“…In Eq (15), β is the weight coefficient that controls the sparsity penalty. After the loss function J_SAE(θ) is minimized, the parameter θ can be obtained.…”
Section: PLOS ONE (mentioning)
confidence: 99%
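The snippet only references Eq (15); for context, a sparse-autoencoder loss in which β weights the sparsity penalty is commonly written as below. This is a generic textbook form, not necessarily the cited paper's Eq (15); m (number of training samples), s (number of hidden units), ρ (target activation) and ρ̂_j (average activation of hidden unit j) are assumed notation.

```latex
J_{\mathrm{SAE}}(\theta)
  = \frac{1}{m}\sum_{i=1}^{m}\tfrac{1}{2}\bigl\lVert \hat{x}^{(i)} - x^{(i)} \bigr\rVert^{2}
  + \beta \sum_{j=1}^{s}\mathrm{KL}\bigl(\rho \,\Vert\, \hat{\rho}_{j}\bigr),
\qquad
\mathrm{KL}\bigl(\rho \Vert \hat{\rho}_{j}\bigr)
  = \rho\log\frac{\rho}{\hat{\rho}_{j}} + (1-\rho)\log\frac{1-\rho}{1-\hat{\rho}_{j}}
```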
“…It is a multi-layer feed-forward network that uses the chain rule in an iterative manner to calculate the gradient for each layer. It is important to choose a suitable activation function for each layer [11][12][13][14].…”
Section: Introduction (mentioning)
confidence: 99%
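A compact sketch of that per-layer chain-rule computation, with an activation function (and its derivative) chosen separately for each layer, might look as follows. The two-layer tanh/linear architecture, the mean-squared-error loss and the random data are assumptions made only for illustration.

```python
# Iterative chain-rule gradient for a feed-forward net, with a per-layer choice
# of (differentiable) activation function. Sizes and data are illustrative.
import numpy as np

# Per-layer activation functions and their derivatives w.r.t. the pre-activation.
acts = [(np.tanh, lambda z: 1.0 - np.tanh(z) ** 2),   # hidden layer: tanh
        (lambda z: z, lambda z: np.ones_like(z))]     # output layer: linear

def gradients(Ws, bs, X, T):
    """Return (dL/dW, dL/db) per layer for an MSE loss, via the chain rule."""
    # Forward pass: keep each layer's input a and pre-activation z for reuse.
    a, cache = X, []
    for (W, b), (f, _) in zip(zip(Ws, bs), acts):
        z = a @ W + b
        cache.append((a, z))
        a = f(z)
    # Backward pass: apply the chain rule layer by layer, output to input.
    delta = (a - T) / len(X)                     # dL/d(output) for the MSE loss
    grads = []
    for (W, _b), (a_prev, z), (_, fprime) in reversed(list(zip(zip(Ws, bs), cache, acts))):
        delta = delta * fprime(z)                # through this layer's activation
        grads.append((a_prev.T @ delta, delta.sum(axis=0)))
        delta = delta @ W.T                      # propagate to the previous layer
    return grads[::-1]

# Toy usage with a 3-4-1 network and one gradient-descent step.
rng = np.random.default_rng(2)
Ws = [rng.normal(size=(3, 4)), rng.normal(size=(4, 1))]
bs = [np.zeros(4), np.zeros(1)]
X, T = rng.normal(size=(10, 3)), rng.normal(size=(10, 1))
for (dW, db), W, b in zip(gradients(Ws, bs, X, T), Ws, bs):
    W -= 0.1 * dW
    b -= 0.1 * db
```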
“…It is a generalization of the delta rule to multi-layered feed-forward networks, made possible by using the chain rule to iteratively compute gradients for each layer. Back-propagation requires an activation function (that is used by the artificial neurons) to be differentiable [10][11][12].…”
Section: Introduction (mentioning)
confidence: 99%
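For reference, the generalized delta rule referred to in that passage is usually stated as below, with η the learning rate, f a differentiable activation function, net_j the net input to unit j, x_i the input carried by the connection, and δ propagated backwards layer by layer; this is standard textbook notation, not a quotation from the cited papers.

```latex
\Delta w_{ij} = \eta\,\delta_{j}\,x_{i},
\qquad
\delta_{j} =
\begin{cases}
  (t_{j} - y_{j})\,f'(\mathrm{net}_{j}) & \text{if } j \text{ is an output unit},\\[4pt]
  f'(\mathrm{net}_{j})\,\sum_{k}\delta_{k}\,w_{jk} & \text{if } j \text{ is a hidden unit}.
\end{cases}
```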