2022
DOI: 10.3390/su141912479
A Comparative Study of Engraved-Digit Data Augmentation by Generative Adversarial Networks

Abstract: In cases where an efficient information retrieval (IR) system retrieves information from images with engraved digits, as found on medicines, creams, ointments, and gels in squeeze tubes, the system needs to be trained on a large dataset. One application of such a system is to automatically retrieve the expiry date to ascertain the efficacy of the medicine. For expiry dates expressed in engraved digits, it is difficult to collect the digit images. In our study, we evaluated the augmentation performance for a limi…

Cited by 5 publications (6 citation statements)
References 38 publications
“…GANs are powerful tools for data augmentation, because they can generate synthetic data that closely resemble real-world samples [1][2][3]. We explored the effectiveness of GAN-based augmentation techniques, such as Wasserstein GAN with a Gradient Penalty (WGAN-GP) and WGAN with Divergence (WGAN-DIV) [31], in improving the recognition performance of classifiers. The WGAN-GP and WGAN-DIV models, among several other GAN models, demonstrated exceptional performance during execution on the engraved digit dataset.…”
Section: Methods
confidence: 99%
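For context, the two regularized critics named in the statement above differ only in how the critic's gradient is penalized. A standard formulation (following the original WGAN-GP and WGAN-div papers, not taken from the article itself; λ, k, and p are the penalty hyperparameters, commonly λ = 10, k = 2, p = 6) is:

```latex
% WGAN-GP critic loss: push the gradient norm at interpolates \hat{x} toward 1
L_{\mathrm{GP}} = \mathbb{E}_{\tilde{x}\sim\mathbb{P}_g}\big[D(\tilde{x})\big]
                - \mathbb{E}_{x\sim\mathbb{P}_r}\big[D(x)\big]
                + \lambda\,\mathbb{E}_{\hat{x}}\Big[\big(\lVert\nabla_{\hat{x}} D(\hat{x})\rVert_2 - 1\big)^{2}\Big]

% WGAN-DIV critic loss: penalize a power p of the gradient norm instead
L_{\mathrm{div}} = \mathbb{E}_{\tilde{x}\sim\mathbb{P}_g}\big[D(\tilde{x})\big]
                 - \mathbb{E}_{x\sim\mathbb{P}_r}\big[D(x)\big]
                 + k\,\mathbb{E}_{\hat{x}}\Big[\lVert\nabla_{\hat{x}} D(\hat{x})\rVert^{p}\Big]
```

In both cases x̂ is sampled between real and generated points; the gradient-norm term is what stabilizes training relative to weight-clipped WGAN.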
“…The addition of emotions to neutral faces to increase the number of underrepresented categories has also been performed. Previous studies have explored augmentation techniques and synthetic datasets using GANs in various domains; for example, Abdulraheem et al. [35] conducted a study in which they leveraged GAN models to develop additional datasets to offer correct results for the automatic recognition of expiration dates in photos. This type of recognition demands a significant number of data for learning purposes.…”
Section: Related Studies
confidence: 99%
“…Collecting a large volume of this dataset was a challenging task because finding a large number of high-quality and sufficiently varied engraved digits to train an efficient recognition model was difficult. We overcame this challenge by augmenting the datasets with images generated by a Wasserstein GAN with a gradient penalty (WGAN-GP) and a Wasserstein divergence for the GAN (WGAN-DIV) [48]. In [48], we conducted a comprehensive evaluation of various state-of-the-art GAN models on engraved digit datasets, including MMGAN, NSGAN, LSGAN, ACGAN, DCGAN, WGAN, WGAN-GP, WGAN-DIV, DRAGAN, BEGAN, EBGAN, and VAE. We assessed the performance of each GAN model by combining a visual inspection of the generated samples and a calculation of the Fréchet Inception Distance values.…”
Section: Engraved Digit Datasets
confidence: 99%
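The Fréchet Inception Distance mentioned above is the Fréchet distance between two Gaussians fitted to feature statistics (in practice, Inception-network activations of real and generated images): FID = ‖μ₁ − μ₂‖² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^½). A minimal NumPy sketch of that formula (illustrative only; the function names are ours, and real pipelines compute μ and Σ from Inception features):

```python
import numpy as np

def _sqrtm_psd(mat):
    # Matrix square root of a symmetric positive semidefinite matrix
    # via eigendecomposition; eigenvalues are clipped at 0 for stability.
    vals, vecs = np.linalg.eigh(mat)
    vals = np.clip(vals, 0.0, None)
    return (vecs * np.sqrt(vals)) @ vecs.T

def frechet_distance(mu1, sigma1, mu2, sigma2):
    # FID = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2}).
    # Uses Tr((S1 S2)^{1/2}) = Tr((S2^{1/2} S1 S2^{1/2})^{1/2}),
    # whose argument is symmetric PSD, so _sqrtm_psd applies.
    s2_half = _sqrtm_psd(sigma2)
    covmean = _sqrtm_psd(s2_half @ sigma1 @ s2_half)
    diff = mu1 - mu2
    return float(diff @ diff
                 + np.trace(sigma1) + np.trace(sigma2)
                 - 2.0 * np.trace(covmean))
```

Identical distributions give a distance of 0; lower values indicate generated samples whose feature statistics are closer to the real data, which is how the GAN models listed above would be ranked.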