Sine neural network (SNN) with double-stage weights and structure determination (DS-WASD)

2014 · DOI: 10.1007/s00500-014-1491-6

Cited by 10 publications (5 citation statements) · References 17 publications
“…Then, a weights and structure determination (WASD) method based on the weights direct determination (WDD) method is used to train the data [28][29][30][31]. Therefore, a WASD method is presented to analyze annual electricity production, and a three-layer feedforward neuronet activated by the sine function is constructed in this paper, whose model diagram is illustrated in Figure 1. The weights from the input layer to the hidden layer are determined randomly, and the weights from the hidden layer to the output layer are calculated by the pseudo-inverse.…”

Section: Discussion
confidence: 99%
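The one-step output-weight computation described in that excerpt is easy to sketch. The following is a minimal illustration, assuming a single-output network and a least-squares fit via the Moore-Penrose pseudo-inverse; the function names, hidden-layer size, and toy target are illustrative choices, not details from the cited paper:

```python
import numpy as np

def train_sine_net(X, y, n_hidden=20, seed=0):
    """Random input weights + one-shot pseudo-inverse output weights (WDD-style)."""
    rng = np.random.default_rng(seed)
    W_in = rng.normal(size=(X.shape[1], n_hidden))  # input-to-hidden: random
    H = np.sin(X @ W_in)                            # sine-activated hidden layer
    w_out = np.linalg.pinv(H) @ y                   # hidden-to-output: pseudo-inverse
    return W_in, w_out

def predict(X, W_in, w_out):
    return np.sin(X @ W_in) @ w_out

# Toy usage: fit a smooth 1-D target in one shot, no iterative training.
X = np.linspace(0, 1, 200).reshape(-1, 1)
y = np.exp(X[:, 0]) * np.cos(4 * X[:, 0])
W_in, w_out = train_sine_net(X, y)
print(np.max(np.abs(predict(X, W_in, w_out) - y)))  # max approximation error
```

Because the hidden activations H are fixed once the random input weights are drawn, solving for the output weights is a linear least-squares problem, which is why a single pseudo-inverse replaces iterative backpropagation here.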
“…The excitation function adopted by the traditional BP neural network is usually sigmoid, tanh, or ReLU, while this paper employs a set of linearly independent orthogonal polynomials, namely Chebyshev polynomials, instead of the sigmoid function, as the excitation function, as shown in Figure 5. A large body of literature [17][18][19] has verified that using Chebyshev polynomials as the excitation function can effectively optimize the BP neural network. In the experiments of this paper, in contrast to BP networks and LSTM networks that use sigmoid as the excitation function, the error of the network using CP as the excitation function declines faster and more steadily, and the prediction accuracy is also higher.…”

Section: Convert Sequence Features Into Target Output (2.3.1 CP Combined…)
confidence: 99%
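A minimal sketch of the Chebyshev-polynomial excitation idea, assuming inputs scaled into [-1, 1] (where the T_j are orthogonal) and the same pseudo-inverse output-weight fit as above; cheb_features and the polynomial degree are illustrative, not the cited paper's exact architecture:

```python
import numpy as np
from numpy.polynomial.chebyshev import chebval

def cheb_features(x, degree):
    # j-th column is T_j(x); the coefficient vector e_j selects T_j in chebval.
    return np.column_stack([chebval(x, [0] * j + [1]) for j in range(degree + 1)])

# Toy usage: least-squares fit of a target with a CP-activated hidden layer.
x = np.linspace(-1.0, 1.0, 100)
H = cheb_features(x, degree=5)               # columns: T_0(x), ..., T_5(x)
w = np.linalg.pinv(H) @ np.sin(np.pi * x)    # output weights, as in the sketch above
print(np.max(np.abs(H @ w - np.sin(np.pi * x))))  # max fitting error
```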
“…Researchers have made a myriad of improvements to BP neural networks, one of which is to change the excitation function of the BP neural network. For example, Zhang et al [17] took the sine function as the excitation function of the BP neural network. CP is a set of orthogonal polynomials that is often used for function approximation.…”

Section: Introduction
confidence: 99%
“…It was, for example, used recently with a data-driven determination of a network's biases in [139]. Just the sine function without any scaling was used as an activation in [347][348][349][350][351][352].…”
Section: Sine
confidence: 99%