1998 IEEE International Joint Conference on Neural Networks Proceedings. IEEE World Congress on Computational Intelligence (Cat
DOI: 10.1109/ijcnn.1998.687211
A simultaneous learning method for both activation functions and connection weights of multilayer neural networks

Abstract: This paper proposes a simultaneous learning algorithm for both activation functions and connection weights. Each activation function is composed of several basic functions, such as the sigmoidal function and the Gaussian function. To avoid local minima, the activation functions are controlled and randomly perturbed every few epochs. The activation functions are automatically optimized for each application. The probability of successful learning and the learning speed are higher than those of conventional methods.
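
The abstract describes the idea only at a high level. Below is a minimal sketch of what such simultaneous learning could look like: a one-hidden-layer network whose activation is a trainable mixture of a sigmoid and a Gaussian, with the connection weights and the mixing coefficients updated together by gradient descent, and the coefficients randomly perturbed every few epochs. The toy task, the single shared mixture, and all names (`BASIS`, `perturb_every`, etc.) are illustrative assumptions, not the paper's exact algorithm.

```python
# Sketch of simultaneous learning of connection weights and
# activation-function mixing coefficients. Illustrative only; not the
# authors' exact formulation.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):  return 1.0 / (1.0 + np.exp(-z))
def dsigmoid(z): s = sigmoid(z); return s * (1.0 - s)
def gauss(z):    return np.exp(-z * z)
def dgauss(z):   return -2.0 * z * np.exp(-z * z)

BASIS  = [sigmoid, gauss]     # basic functions composing the activation
DBASIS = [dsigmoid, dgauss]   # their derivatives

# Toy 1-D regression problem (assumed for demonstration).
x = rng.uniform(-2.0, 2.0, size=(64, 1))
y = np.sin(2.0 * x)

h = 8                                       # hidden units
W = rng.normal(0.0, 1.0, (1, h))            # input-to-hidden weights
v = rng.normal(0.0, 1.0, (h, 1))            # hidden-to-output weights
c = np.full(len(BASIS), 1.0 / len(BASIS))   # activation mixing coefficients

lr, perturb_every, noise = 0.05, 50, 0.02

for epoch in range(500):
    # Forward pass: mixed activation a = sum_k c_k * phi_k(z).
    z    = x @ W                               # (n, h) pre-activations
    phi  = np.stack([f(z) for f in BASIS])     # (k, n, h) basis outputs
    a    = np.tensordot(c, phi, axes=1)        # (n, h) mixed activation
    yhat = a @ v
    loss = np.mean((yhat - y) ** 2)

    # Backward pass: gradients for the weights AND the coefficients.
    g_yhat = 2.0 * (yhat - y) / len(x)
    g_v    = a.T @ g_yhat
    g_a    = g_yhat @ v.T
    g_c    = np.array([np.sum(g_a * p) for p in phi])
    dz     = sum(ck * df(z) for ck, df in zip(c, DBASIS))
    g_W    = x.T @ (g_a * dz)

    # Simultaneous gradient-descent update.
    W -= lr * g_W
    v -= lr * g_v
    c -= lr * g_c

    # Randomly disturb the activation coefficients every few epochs, as
    # the abstract suggests, to help escape local minima.
    if epoch % perturb_every == 0:
        c += rng.normal(0.0, noise, size=c.shape)

print(f"final loss: {loss:.4f}")
```

Note that the coefficients `c` receive the same gradient step as the weights; the periodic noise injection is a crude stand-in for the paper's "controlled and randomly perturbed" scheme.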

Cited by 7 publications (3 citation statements)
References 8 publications
“…As the standard BP algorithm is based on the gradient descent method [23], the network is trained by computing the gradient of the objective function with respect to the BP network's weights and deviations; the iterative update of the weights and deviations can be expressed as:…”
Section: BP Algorithm for Complementary Data Fusion
confidence: 99%
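
The snippet above elides the actual formula. For reference, the textbook gradient-descent update it refers to, for a weight $w$ and a bias-like deviation $b$ with learning rate $\eta$ and objective $E$, takes the form below; this is the standard form, not necessarily the exact expression in the citing paper:

$$
w_{t+1} = w_t - \eta \,\frac{\partial E}{\partial w_t}, \qquad
b_{t+1} = b_t - \eta \,\frac{\partial E}{\partial b_t}
$$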
“…It is similar to the activation function training [6], [7]. Compared with the above, in the new structure the hidden unit outputs are replaced by the gate unit outputs.…”
Section: Simultaneous Learning
confidence: 99%
“…Therefore, the update formula proposed for the trainable activation functions can essentially be applied [6], [7].…”
Section: Simultaneous Learning
confidence: 99%
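
For context, a trainable activation function of the kind the cited paper proposes can be written as a mixture $f(z) = \sum_k c_k\,\varphi_k(z)$ of basic functions $\varphi_k$ (e.g. sigmoid, Gaussian), whose coefficients are updated by the same gradient-descent rule as the connection weights. The form below is an illustrative reconstruction from the abstract, not a quotation of the paper's formula:

$$
c_k \leftarrow c_k - \eta \,\frac{\partial E}{\partial c_k},
\qquad
\frac{\partial E}{\partial c_k} = \sum_i \frac{\partial E}{\partial a_i}\,\varphi_k(z_i)
$$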