2020
DOI: 10.29252/aassjournal.799

Predicting the Medals of the Countries Participating in the Tokyo 2020 Olympic Games Using the Test of Networks of Multilayer Perceptron (MLP)

Abstract: Background. International success, especially at the Olympic Games, has become significantly important to many countries; prediction can therefore support better planning toward this goal. Objectives. This study was conducted to predict the success of the countries participating in the Tokyo Olympic Games using intelligent methods. Methods. The study was conducted in two stages: qualitative (determination of indicators) and quantitative (collection of data on selected countries). In the first st…

Cited by 4 publications (2 citation statements)
References 23 publications
“…The components of the models are nodes, weights, and layers (input, hidden, and output layers) (41). MLP-ANN is the simplest and most commonly used ANN architecture due to its structural flexibility, good representational capabilities, and a large number of training algorithms [47,48]. In this study, in order to develop an MLP-ANN, we used 12 training algorithms, including Levenberg-Marquardt (LM), Bayesian regularization (BR), Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton, resilient backpropagation (RP), scaled conjugate gradient (SCG), conjugate gradient with Powell-Beale (CGB) restarts, conjugate gradient Fletcher-Powell (CGF), conjugate gradient with Polak-Ribière (CGP) updates, one-step secant (OSS), gradient descent with variable learning rate (GDX), gradient descent with momentum (GDM), and gradient descent (GD) backpropagation, described in Table 2.…”
Section: Model Development
confidence: 99%
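The algorithm list in the excerpt resembles MATLAB's Neural Network Toolbox training functions, but the same comparison idea can be sketched in Python. The sketch below is illustrative only: the synthetic data, layer size, and solver choices are assumptions, not the cited study's setup, and scikit-learn exposes only rough analogues ('lbfgs' for the quasi-Newton family, 'sgd' variants for the gradient-descent family) rather than the 12 algorithms listed.

```python
# Minimal sketch: comparing MLP training algorithms on synthetic data.
# Assumed stand-ins for the paper's country indicators and medal counts.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                 # 8 hypothetical input indicators
y = X @ rng.normal(size=8) + rng.normal(scale=0.1, size=200)  # medal-count proxy

for solver, kwargs in [
    ("lbfgs", {}),                            # quasi-Newton family (cf. BFGS)
    ("sgd", {"momentum": 0.9}),               # cf. gradient descent with momentum (GDM)
    ("sgd", {"learning_rate": "adaptive"}),   # cf. variable learning rate (GDX)
]:
    mlp = MLPRegressor(hidden_layer_sizes=(10,), solver=solver,
                       max_iter=2000, random_state=0, **kwargs)
    mlp.fit(X, y)
    print(solver, kwargs, f"R^2 = {mlp.score(X, y):.3f}")
```

In practice one would compare solvers on held-out data rather than training-set R^2; the loop above only shows the mechanical pattern of swapping training algorithms while holding the architecture fixed.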
“…The components of the models are nodes, weights, and layers (input, hidden, and output layers) (41). Multilayer perceptron-ANN (MLP-ANN) is the simplest and most commonly used ANN architecture due to its structural flexibility, good representational capabilities, and a large number of training algorithms (30,31).…”
Section: Model Development
confidence: 99%
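To make the "nodes, weights, and layers" decomposition concrete, here is a hand-rolled forward pass. The function name, layer sizes, and random parameters are hypothetical illustrations, not taken from the cited papers.

```python
# A forward pass showing that an MLP is weights and biases applied
# layer by layer, with a nonlinearity at each hidden node.
import numpy as np

def mlp_forward(x, weights, biases):
    """Propagate input x through input -> hidden -> output layers."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.tanh(W @ a + b)               # hidden layers: weighted sum + activation
    return weights[-1] @ a + biases[-1]      # linear output layer

rng = np.random.default_rng(0)
sizes = [8, 10, 1]                           # input, hidden, output node counts (assumed)
weights = [rng.normal(size=(m, n)) for n, m in zip(sizes, sizes[1:])]
biases = [rng.normal(size=m) for m in sizes[1:]]
print(mlp_forward(rng.normal(size=8), weights, biases))
```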