The recent success of deep neural networks is attributed in part to large-scale, well-labeled training data. However, as modern datasets keep growing while label information remains difficult to obtain, semi-supervised learning (SSL) has become an increasingly important topic in data analysis. In this paper, we propose an Incremental Self-Labeling strategy for SSL based on Generative Adversarial Nets (ISL-GAN), which continually assigns virtual labels to unlabeled data to promote the training process. Specifically, during the virtual labeling process, we introduce a temporal self-labeling strategy for safe and stable data labeling. Then, to dynamically assign more virtual labels as training proceeds, we conduct a phased incremental label screening and updating strategy. Finally, to balance the contributions of samples with different losses during training, we further introduce a Balance factor Term (BT). Experimental results show that the proposed method achieves state-of-the-art semi-supervised learning results on the MNIST, CIFAR-10, and SVHN datasets. In particular, our model performs well when few labels are available: with only 1,000 labeled CIFAR-10 images and the CONV-Large network, it achieves a test error of 11.2%, and it reaches nearly the same performance on SVHN, with a test error of about 3.5%, using either 500 or 1,000 labeled images.

INDEX TERMS Deep learning, semi-supervised learning, generative adversarial networks, self-labeling.
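To make the temporal self-labeling idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual algorithm): an exponential moving average (EMA) of each unlabeled sample's class predictions smooths out noisy epochs, and only samples whose smoothed confidence clears a threshold receive a virtual label. The decay 0.6, the threshold 0.9, and all function names are illustrative assumptions.

```python
# Hypothetical sketch of temporal self-labeling for SSL. An EMA over
# per-epoch softmax predictions gives a stable estimate per sample;
# a confidence threshold gates which samples get virtual labels.

def update_ensemble(ensemble, preds, alpha=0.6):
    """EMA-accumulate one epoch of per-sample class predictions."""
    return [[alpha * e + (1 - alpha) * p for e, p in zip(erow, prow)]
            for erow, prow in zip(ensemble, preds)]

def bias_correct(ensemble, alpha, epoch):
    """Startup bias correction, as in temporal ensembling."""
    scale = 1 - alpha ** epoch
    return [[e / scale for e in row] for row in ensemble]

def assign_virtual_labels(corrected, threshold=0.9):
    """Return (predicted label, accepted?) per sample from smoothed confidence."""
    out = []
    for row in corrected:
        conf = max(row)
        out.append((row.index(conf), conf >= threshold))
    return out

# Toy run: 3 unlabeled samples, 2 classes, two training epochs.
ensemble = [[0.0, 0.0] for _ in range(3)]
epoch1 = [[0.9, 0.1], [0.55, 0.45], [0.2, 0.8]]
epoch2 = [[0.95, 0.05], [0.5, 0.5], [0.1, 0.9]]
ensemble = update_ensemble(ensemble, epoch1)
ensemble = update_ensemble(ensemble, epoch2)
result = assign_virtual_labels(bias_correct(ensemble, 0.6, 2))
print(result)  # only the consistently confident first sample is labeled
```

The phased screening described in the abstract could then be emulated by lowering `threshold` in stages as training stabilizes, so that more unlabeled samples are admitted over time; the BT weighting would additionally rescale each accepted sample's loss.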