2021
DOI: 10.3906/elk-2006-143
Determining overfitting and underfitting in generative adversarial networks using Fréchet distance

Abstract: Generative Adversarial Networks (GANs) can be used in a wide range of applications where drawing samples from a data probability distribution without explicitly representing it is essential. Unlike deep Convolutional Neural Networks (CNNs) trained to map an input to one of multiple outputs, monitoring overfitting and underfitting in GANs is not trivial, since they do not classify but generate data. While training set and validation set accuracy give a direct sense of success in terms of…
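The Fréchet distance named in the title is, in its common FID-style usage, computed in closed form between two Gaussians fitted to feature statistics of real and generated samples: d²((μ₁,Σ₁),(μ₂,Σ₂)) = ‖μ₁−μ₂‖² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^{1/2}). The paper's exact monitoring procedure is not shown in this excerpt; the sketch below only illustrates that standard computation, and the names `frechet_distance`, `feats_real`, and `feats_gen` are hypothetical.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_gen):
    """Fréchet distance between Gaussians fitted to two feature sets.

    feats_real, feats_gen: (n_samples, n_features) arrays, e.g. network
    activations for real and generated images (hypothetical inputs).
    """
    mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    sigma1 = np.cov(feats_real, rowvar=False)
    sigma2 = np.cov(feats_gen, rowvar=False)
    diff = mu1 - mu2
    # Matrix square root of the product of the two covariance matrices.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):
        # Numerical error can introduce a tiny imaginary component.
        covmean = covmean.real
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)
```

Tracking this distance on training and held-out data as training proceeds is one way a divergence between the two curves can signal memorization of the training set.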

Cited by 3 publications (2 citation statements)
References 5 publications
“…These results suggest that an ensemble algorithm could provide a higher prediction performance than a single approach. In addition, overfitting and underfitting models will lead to an increased generalization error in machine learning. Then, the learning curves were drawn to evaluate the ability at different amounts of data. The error decreases as the number of samples increases, as seen by contrasting the learning curve scores (10-fold cross-validation), where PLS, RF, and GBR could converge on a higher score level compared to SVM (underfit), especially RF, which has the highest R² (number of training samples = 712, training R² = 0.998, testing R² = 0.982), as shown in Figure c–h.…”
Section: Results
confidence: 99%
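The learning-curve diagnosis described in the statement above can be reproduced with standard tooling. Below is a minimal sketch using scikit-learn; the data `X`, `y` and the model settings are placeholders, not the cited study's actual data or configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import learning_curve

# Hypothetical data standing in for the cited study's 712 training samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(712, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=712)

# R² scores over growing training-set sizes, 10-fold cross-validation.
sizes, train_scores, val_scores = learning_curve(
    RandomForestRegressor(n_estimators=200, random_state=0),
    X, y, cv=10, scoring="r2",
    train_sizes=np.linspace(0.1, 1.0, 5),
)
print("train R²:", train_scores.mean(axis=1))
print("CV    R²:", val_scores.mean(axis=1))
# A persistent gap between the two curves suggests overfitting; two low,
# converged curves suggest underfitting.
```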
“…In machine learning, overfitting and underfitting models will lead to an increase in generalization error (Eken, 2021; Salam, Azar, Elgendy, & Fouad, 2021). RF could be balanced by adjusting parameters, and the prediction power for unknown samples would be fully realized (Hou et al., 2021; Torre-Tojal, Bastarrika, Boyano, Lopez-Guede, & Graña, 2022).…”
Section: RF Regulation and Model Validation
confidence: 99%