2016
DOI: 10.1166/jctn.2016.5319
Using Orthogonal Grey Wolf Optimizer with Mutation for Training Multi-Layer Perceptron Neural Network


Cited by 2 publications (1 citation statement)
References 0 publications
“…As can be seen from equations (19)–(22), the weights and biases are the main elements of MLPs; they determine the final output values for given inputs (Zhang et al., 2016). Training an MLP thus means finding appropriate weight and bias values that achieve the desired relationship between inputs and outputs (Mirjalili, 2015).…”
Section: Feed-forward Neural Network and Multi-layer Perceptron
confidence: 99%
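The cited passage describes how an MLP's weights and biases map inputs to outputs. A minimal forward-pass sketch illustrating that relationship (an illustrative example with hand-picked parameters, not code from the paper; the function name `mlp_forward` and the tiny 2-2-1 architecture are assumptions):

```python
import math

def mlp_forward(x, weights, biases):
    """Forward pass through a multi-layer perceptron.

    weights: one weight matrix (list of rows) per layer
    biases:  one bias vector per layer
    """
    a = x
    for W, b in zip(weights, biases):
        # affine transform: z_i = sum_j W[i][j] * a[j] + b[i]
        z = [sum(w_ij * a_j for w_ij, a_j in zip(row, a)) + b_i
             for row, b_i in zip(W, b)]
        # sigmoid activation on each unit
        a = [1.0 / (1.0 + math.exp(-z_i)) for z_i in z]
    return a

# Tiny 2-2-1 network; training would search over these numbers
weights = [[[0.5, -0.5], [0.3, 0.8]],  # hidden layer (2 units, 2 inputs)
           [[1.0, -1.0]]]              # output layer (1 unit, 2 inputs)
biases = [[0.1, -0.1], [0.0]]
out = mlp_forward([1.0, 2.0], weights, biases)  # single value in (0, 1)
```

In this framing, a trainer such as the orthogonal Grey Wolf Optimizer of the titled paper would treat the flattened `weights` and `biases` as the search-space variables and minimize the error between `mlp_forward` outputs and the desired targets.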