2020 8th International Conference on Orange Technology (ICOT)
DOI: 10.1109/icot51877.2020.9468799

Weight Dropout for Preventing Neural Networks from Overfitting

Cited by 19 publications (8 citation statements) | References 2 publications
“…Compared with the complexity of the model, the training dataset is small, which makes the model prone to overfitting. At present, researchers usually use methods such as expanding the dataset, removing features, regularization, and terminating training early to prevent model overfitting (Sanjar et al., 2020). Data augmentation is a way to increase training data, which can be realized by flipping, translation, rotation, scaling, and generation methods.…”
Section: Discussion (mentioning)
confidence: 99%
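The augmentation operations this excerpt lists (flipping, translation, rotation, scaling) map directly onto standard preprocessing layers. Below is a minimal sketch using tf.keras, where the specific factor values (0.1, 0.2) are illustrative assumptions, not settings from the cited work.

```python
# Minimal data-augmentation sketch matching the operations listed above.
# Factor values are illustrative assumptions, not from the cited paper.
import tensorflow as tf

augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),     # flipping
    tf.keras.layers.RandomTranslation(0.1, 0.1),  # translation
    tf.keras.layers.RandomRotation(0.1),          # rotation
    tf.keras.layers.RandomZoom(0.2),              # scaling
])

# Applied on the fly to a tf.data pipeline during training, e.g.:
# train_ds = train_ds.map(lambda x, y: (augment(x, training=True), y))
```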
“…Overfitting in a model can be prevented in several ways, including Dropout [4], Regularization [5], and Batch Normalization [6]. Beyond these solutions, overfitting can also be prevented by training on a large and varied dataset.…”
Section: Introduction (unclassified)
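For concreteness, the three techniques this excerpt names can all be expressed in a few lines of Keras. The sketch below is a hedged illustration: the input width, layer size, L2 strength, and dropout rate are assumed values, not ones taken from the citing paper.

```python
# Hedged sketch combining the three regularizers named above:
# L2 weight regularization, Batch Normalization, and Dropout.
# All sizes and rates here are assumptions for illustration.
import tensorflow as tf
from tensorflow.keras import layers, regularizers

model = tf.keras.Sequential([
    layers.Input(shape=(32,)),                               # assumed input width
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),  # regularization
    layers.BatchNormalization(),                             # batch normalization
    layers.Dropout(0.5),                                     # dropout
    layers.Dense(1, activation="sigmoid"),
])
```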
“…The Adam optimizer [26] is the one that is most frequently used. Furthermore, to avoid overfitting the ANN model, we applied the dropout strategy [27], which involves temporarily removing neurons from the hidden layers during model training. Figure 2 shows the structure of the ANN model with the best hyper-parameter values for the dysgraphia dataset determined by the keras-tuner.…”
Section: Artificial Neural Network Model (mentioning)
confidence: 99%
“…Our model is made up of an input layer with 15 neurons representing the dataset dimensions and three hidden layers with 22, 26, and 14 neurons, respectively. We used two dropout layers (with p=0.5), one after each of the first two fully connected hidden layers, as proposed in the original paper [27], and an output layer with one neuron for binary dysgraphia classification.…”
Section: Artificial Neural Network Model (mentioning)
confidence: 99%
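Taken together, the two excerpts above fully specify a small feed-forward network, which could look roughly like the Keras sketch below. The ReLU/sigmoid activations and the Adam learning rate are assumptions, since the quoted text does not state them.

```python
# Rough reconstruction of the ANN the excerpts describe: 15 inputs,
# hidden layers of 22/26/14 units, Dropout(p=0.5) after the first two
# hidden layers, one sigmoid output neuron. Activations and learning
# rate are assumptions not stated in the quoted text.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(15,)),              # 15-dimensional dataset
    layers.Dense(22, activation="relu"),
    layers.Dropout(0.5),                    # dropout after 1st hidden layer
    layers.Dense(26, activation="relu"),
    layers.Dropout(0.5),                    # dropout after 2nd hidden layer
    layers.Dense(14, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # binary dysgraphia output
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy",
              metrics=["accuracy"])
```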