2021
DOI: 10.1007/s42979-021-00757-8
Multistage Newton’s Approach for Training Radial Basis Function Neural Networks

Abstract: A systematic four-step batch approach is presented for the second-order training of radial basis function (RBF) neural networks for estimation. First, it is shown that second-order training works best when applied separately to several disjoint parameter subsets. Newton's method is used to find distance measure weights, leading to a kind of embedded feature selection. Next, separate Newton's algorithms are developed for RBF spread parameters, center vectors, and output weights. The final algorithm's training e…
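
The staged scheme described in the abstract can be pictured with a short sketch. The following NumPy code is an illustration under assumed names and sizes, not the paper's implementation: it builds Gaussian RBF activations and carries out the output-weight stage, where a single Newton step reduces to regularized linear least squares; the other parameter stages are only indicated in comments.

```python
# Minimal sketch, assuming Gaussian RBF units and the variable names below;
# this illustrates the staged idea, not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: N samples, D inputs, one output (all sizes assumed).
N, D, K = 200, 3, 10                      # K = number of RBF units
X = rng.normal(size=(N, D))
y = np.sin(X).sum(axis=1, keepdims=True)

centers = X[rng.choice(N, K, replace=False)].copy()  # initial center vectors
spreads = np.full(K, 1.0)                             # per-unit spread parameters

def activations(X, centers, spreads):
    # Gaussian RBF activations: phi[n, k] = exp(-||x_n - c_k||^2 / (2 s_k^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * spreads[None, :] ** 2))

# Output-weight stage: the error is quadratic in the output weights, so a single
# Newton step is exactly the (regularized) linear least-squares solution.
Phi = activations(X, centers, spreads)
lam = 1e-6                                            # small ridge term (assumed)
W = np.linalg.solve(Phi.T @ Phi + lam * np.eye(K), Phi.T @ y)

# The remaining stages (distance-measure weights, spreads, centers) would each
# form their own gradient and Hessian and take a damped Newton step; they are
# omitted here to keep the sketch short.
print("MSE after output-weight stage:", float(np.mean((Phi @ W - y) ** 2)))
```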

Cited by 6 publications (5 citation statements). References: 50 publications.
“…An autoencoder (AE) [21], [54], as shown in Figure 1, has the same structure as a fully connected regression-type MLP, except that the bypass weights [122] are removed. In an autoencoder framework, the input layer represents the original data, the hidden layer represents the transformed features, and the output layer matches the input layer for reconstruction.…”
Section: Deep Autoencoder
confidence: 99%
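
As a concrete picture of the structure described in the statement above, here is a minimal NumPy sketch (assumed layer sizes and weight names, not code from the cited works) of a single-hidden-layer autoencoder: the output layer has the same width as the input layer, and there are no input-to-output bypass connections.

```python
# Minimal sketch of the autoencoder structure described above (assumed layer
# sizes and weight names; not code from the cited works).
import numpy as np

rng = np.random.default_rng(1)
N, D, H = 100, 8, 3                          # samples, input width, hidden width
X = rng.normal(size=(N, D))

W_enc = rng.normal(scale=0.1, size=(D, H))   # input layer -> hidden (features)
b_enc = np.zeros(H)
W_dec = rng.normal(scale=0.1, size=(H, D))   # hidden -> output (same width as input)
b_dec = np.zeros(D)

def forward(X):
    hidden = np.tanh(X @ W_enc + b_enc)      # transformed features
    recon = hidden @ W_dec + b_dec           # reconstruction of the input
    return hidden, recon

_, recon = forward(X)
print("reconstruction MSE:", float(np.mean((recon - X) ** 2)))
```

Training such a model would minimize the reconstruction error so that the hidden layer learns a compressed representation of the input.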
“…The high dependency on first-order gradient-based training, as in (D1), is an appealing area of research. Certainly, there is a need to investigate non-gradient methods such as the Gram-Schmidt procedure, alongside our successful implementation of second-order MLP training algorithms [83], [100], [122], [64], [47], [92]. As mentioned in (D2), the choice of activation function is extremely important while training the deep learner.…”
Section: Problems With Deep Learning Network
confidence: 99%
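
The non-gradient Gram-Schmidt procedure mentioned in the statement above can be sketched in a few lines. The example below is illustrative only (not taken from the cited works): it orthonormalizes the columns of a feature matrix without using any gradients.

```python
# Minimal sketch of a Gram-Schmidt orthogonalization (illustrative only):
# a non-gradient procedure that orthonormalizes the columns of a feature matrix.
import numpy as np

def modified_gram_schmidt(A):
    """Return Q whose orthonormal columns span the columns of A."""
    A = A.astype(float).copy()
    Q = np.zeros_like(A)
    for j in range(A.shape[1]):
        v = A[:, j]
        for i in range(j):
            v = v - (Q[:, i] @ v) * Q[:, i]  # remove the component along q_i
        Q[:, j] = v / np.linalg.norm(v)
    return Q

A = np.random.default_rng(2).normal(size=(6, 4))
Q = modified_gram_schmidt(A)
print(np.allclose(Q.T @ Q, np.eye(4)))       # True: columns are orthonormal
```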
“…Second-order algorithms account for the convexity (curvature) of the objective function through the Hessian matrix. This approach is called the Newton optimization method [12]. Computing the inverse of the Hessian matrix makes second-order optimization more complex.…”
Section: Introduction
confidence: 99%
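
A minimal sketch of the Newton optimization method referenced above (illustrative; the Rosenbrock function is an assumed test objective, not from [12]): each iteration uses the Hessian's curvature information and solves a linear system rather than forming the inverse explicitly.

```python
# Minimal sketch of a Newton optimization step (illustrative; the Rosenbrock
# function is an assumed test objective).
import numpy as np

def f(x):
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

def grad(x):
    return np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
        200 * (x[1] - x[0] ** 2),
    ])

def hess(x):
    # The Hessian carries the curvature information used by the Newton step.
    return np.array([
        [2 - 400 * (x[1] - 3 * x[0] ** 2), -400 * x[0]],
        [-400 * x[0], 200.0],
    ])

x = np.array([-1.2, 1.0])
for _ in range(20):
    # Newton update: solve H d = g instead of forming the inverse explicitly.
    x = x - np.linalg.solve(hess(x), grad(x))
print(x, f(x))   # approaches the minimizer (1, 1)
```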
“…Computing the inverse of the Hessian matrix makes second-order optimization more complex. This approach is called the Newton optimization method [12]. In machine learning, where the number of neurons can exceed one hundred, Newton optimization is an ineffective tool, but approximating the inverse Hessian matrix makes it possible to minimize the loss function within the required time.…”
Section: Introduction
confidence: 99%
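
The Hessian-approximation idea in the statement above can be sketched with a quasi-Newton routine. The example below is an assumption-laden illustration (the least-squares problem, the names, and the use of SciPy's L-BFGS-B implementation are not from [12]): the optimizer maintains a low-memory approximation of the inverse Hessian instead of inverting the exact Hessian.

```python
# Minimal sketch of the Hessian-approximation idea: a quasi-Newton method
# (SciPy's L-BFGS-B) keeps a low-memory approximation of the inverse Hessian
# instead of inverting the exact Hessian. Problem and names are assumed.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
A = rng.normal(size=(50, 10))
b = A @ rng.normal(size=10) + 0.01 * rng.normal(size=50)

def loss(w):
    r = A @ w - b
    return 0.5 * r @ r                 # least-squares loss

def grad(w):
    return A.T @ (A @ w - b)           # exact gradient supplied to the optimizer

result = minimize(loss, np.zeros(10), jac=grad, method="L-BFGS-B")
print(result.success, float(loss(result.x)))
```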