2017
DOI: 10.1017/s0269964817000201
Random Neural Network Learning Heuristics

Abstract: The random neural network (RNN) is a probabilistic, queueing theory-based model for artificial neural networks, and it requires the use of optimization algorithms for training. Commonly used gradient descent learning algorithms may become trapped in local minima; evolutionary algorithms can be used to avoid local minima. Other techniques such as artificial bee colony (ABC), particle swarm optimization (PSO), and differential evolution algorithms also perform well in finding the global minimum, but they converge slo…
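The contrast the abstract draws between gradient descent (prone to local minima) and population-based search can be illustrated on a toy one-dimensional objective. This is a minimal sketch unrelated to the paper's actual RNN training code: the objective `loss` is a made-up multimodal function, and SciPy's `differential_evolution` stands in for the evolutionary-algorithm family.

```python
import numpy as np
from scipy.optimize import differential_evolution

def loss(x):
    # Toy multimodal objective: global minimum of 0 at x = 0,
    # with several local minima created by the sin^2 term.
    x = np.asarray(x, dtype=float)
    return float(x[0] ** 2 + 3.0 * np.sin(3.0 * x[0]) ** 2)

def grad(x):
    # d/dx [x^2 + 3 sin^2(3x)] = 2x + 9 sin(6x)
    return 2.0 * x + 9.0 * np.sin(6.0 * x)

# Plain gradient descent from a poorly chosen starting point.
x = 2.0
for _ in range(2000):
    x -= 0.01 * grad(x)
gd_loss = loss([x])

# Differential evolution searches the whole interval instead.
result = differential_evolution(loss, bounds=[(-4.0, 4.0)], seed=0)
```

Here gradient descent settles into a nearby local minimum with a visibly higher loss, while differential evolution locates the global basin around the origin, which is the behavior gap the abstract describes.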

Cited by 6 publications (2 citation statements)
References 85 publications (101 reference statements)
“…The following methods were adopted for training: RNN-IDS with a gradient descent (GD) algorithm, and RNN-ABC with an artificial bee colony (ABC) algorithm. In addition, all methods for both ABC and GD were trained using learning rates of 0.4, 0.1, and 0.01, and further comparisons were undertaken to assess the mean squared error (MSE) through the mean of MSE (MMSE), standard deviation of MSE (SDMSE), best MSE (BMSE), and worst MSE (WMSE) [30] during the RNN training and testing phases.…”
Section: Accuracy
confidence: 99%
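The four MSE summary statistics named in the statement above (MMSE, SDMSE, BMSE, WMSE) are straightforward to compute from a set of per-run MSEs. This is an illustrative sketch with hypothetical values, not data from the cited study:

```python
import numpy as np

def mse_summary(run_mses):
    """Summarize per-run MSEs as in the comparison described above:
    mean (MMSE), standard deviation (SDMSE), best (BMSE), worst (WMSE)."""
    m = np.asarray(run_mses, dtype=float)
    return {
        "MMSE": float(m.mean()),
        "SDMSE": float(m.std(ddof=1)),  # sample std across independent runs
        "BMSE": float(m.min()),         # best = lowest error achieved
        "WMSE": float(m.max()),         # worst = highest error observed
    }

# Hypothetical MSEs from five independent training runs (illustrative only)
stats = mse_summary([0.012, 0.015, 0.011, 0.020, 0.014])
```

Reporting best and worst alongside the mean and standard deviation is what lets such comparisons distinguish an optimizer that is reliably mediocre from one that is occasionally excellent but unstable.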
“…It is reported in [142,143] that this algorithm achieved better performance than the gradient-descent algorithm on a combinatorial optimization problem arising in disaster management. Javed et al. [96] proposed combining the artificial bee colony/particle swarm optimization algorithms with the sequential quadratic programming (SQP) optimization algorithm to train the RNN. The RNN was also investigated for deep learning (DL), and efficient DL algorithms based on the RNN were proposed [66,156].…”
Section: Learning in Random Neural Network
confidence: 99%
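The hybrid scheme attributed to Javed et al. [96] pairs a global population-based search with a local gradient-based refinement. The two-stage sketch below is a rough illustration of that general idea, not the authors' implementation: a plain global-best PSO explores a made-up multimodal objective, and SciPy's SLSQP (its SQP-family method) polishes the best candidate found.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def rastrigin(x):
    # Standard multimodal test objective; global minimum of 0 at the origin.
    x = np.asarray(x, dtype=float)
    return float(10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x)))

def pso(f, dim=2, n=30, iters=200, lo=-5.12, hi=5.12):
    # Stage 1: plain global-best particle swarm for coarse exploration.
    pos = rng.uniform(lo, hi, (n, dim))
    vel = np.zeros((n, dim))
    pbest = pos.copy()
    pbest_val = np.array([f(p) for p in pos])
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([f(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g

coarse = pso(rastrigin)                             # stage 1: swarm search
fine = minimize(rastrigin, coarse, method="SLSQP")  # stage 2: SQP refinement
```

The design rationale is complementary strengths: the swarm is unlikely to get trapped in a single basin, while SQP converges quickly once a good basin has been found.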