This article proposes a generative adversarial network called the explicit affine disentangled generative adversarial network (EAD-GAN), which explicitly disentangles affine transforms in a self-supervised manner. We propose an affine transform regularizer that forces the InfoGAN to have explicit affine transform properties. To facilitate training an affine transform encoder, we decompose the affine matrix into two separate matrices and infer the explicit transform parameters by the least-squares method. Unlike existing approaches, the representations learned by the proposed EAD-GAN have clear physical meaning: transforms such as rotation, horizontal and vertical zoom, skew, and translation are explicitly learned from the training data. Thus, each transform parameter can be set individually to generate data with a specific affine transform from the learned network. We show that the proposed EAD-GAN successfully disentangles these attributes on the MNIST, CelebA, and dSprites datasets. EAD-GAN achieves higher disentanglement scores by a large margin compared to state-of-the-art methods on the dSprites dataset. For example, EAD-GAN achieves MIG and DCI scores of 0.59 and 0.96, respectively, compared to 0.37 and 0.71 for the state-of-the-art methods.
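The exact decomposition used by EAD-GAN is defined in the paper; as a rough illustration of the least-squares idea, the sketch below fits a 2x3 affine matrix to point correspondences and splits it into rotation, zooms, shear, and translation. The function names and the particular rotation-shear-scale factorization are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the paper's implementation): recover explicit affine
# parameters from point correspondences by least squares, then split the
# estimated matrix into rotation, zooms, and shear. Parameter names
# (theta, sx, sy, shear, t) are illustrative assumptions.
import numpy as np

def fit_affine(src, dst):
    """Least-squares fit of a 2x3 affine matrix mapping src -> dst (both N x 2)."""
    ones = np.ones((src.shape[0], 1))
    X = np.hstack([src, ones])                   # N x 3 design matrix
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)  # 3 x 2 solution
    return A.T                                   # 2 x 3 affine matrix

def decompose_affine(A):
    """Split the 2x2 linear part into rotation, zooms, and shear; keep translation."""
    M, t = A[:, :2], A[:, 2]
    sx = np.hypot(M[0, 0], M[1, 0])
    theta = np.arctan2(M[1, 0], M[0, 0])
    shear = (M[0, 0] * M[0, 1] + M[1, 0] * M[1, 1]) / sx
    sy = np.linalg.det(M) / sx
    return theta, sx, sy, shear, t

# Usage: build a known transform, apply it to random points, and recover it.
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 2))
theta_true = 0.3
R = np.array([[np.cos(theta_true), -np.sin(theta_true)],
              [np.sin(theta_true),  np.cos(theta_true)]])
dst = src @ (1.2 * R).T + np.array([0.5, -0.1])
A = fit_affine(src, dst)
print(decompose_affine(A))   # ~ (0.3, 1.2, 1.2, 0.0, [0.5, -0.1])
```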
Deep learning has shown unprecedented performance on computer vision tasks in recent years. One of its foundations is large datasets with human annotations. However, human-annotated datasets have inherent drawbacks. First, annotation is expensive, especially for tasks such as segmentation. Second, the annotations themselves may be incorrect, owing to the subjective nature of the problem. Last but not least, if we wish an algorithm to evolve in real-world scenarios, it is not feasible to keep annotating all surrounding objects in real time.

To better deploy algorithms in real-world scenarios, we want to apply deep learning with minimal human annotation, for example in an unsupervised or self-supervised manner. Specifically, we tackle this problem from the perspective of generative models and disentangled representations. With generative models, the outputs of the model can be visualized; with disentangled representations, the different attributes learned by the model can be separated. The combination of the two provides a pathway to aligning the visualized attributes with human intuition. To learn disentangled representations in an unsupervised or self-supervised manner, we approach the problem through contrastive learning and inductive bias. With contrastive learning, we can produce more data samples by transforming the original data and comparing the differences between the resulting pairs (a minimal sketch follows this summary). With inductive bias, we can formulate a meaningful relationship between the transformed and original data samples. In this thesis, we demonstrate the effectiveness of inductive biases such as affine transforms and facial attributes.

In summary, the thesis contributes to disentangled image representation, which provides a pathway to understanding the output of a generative model more vividly by visualizing the results and aligning them with human intuition.
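As a minimal illustration of the transformed-pair idea (an assumption for exposition, not the thesis code), the sketch below samples random affine parameters, applies them to an image with torchvision, and keeps the parameters as the self-supervised target for the pair.

```python
# Minimal sketch: generate an affine-transformed view of an image and record
# the sampled parameters as the self-supervised target. The parameter ranges
# are illustrative assumptions.
import torch
import torchvision.transforms.functional as TF

def make_affine_pair(img):
    """Return (original view, affine-transformed view, transform parameters)."""
    angle = float(torch.empty(1).uniform_(-30, 30))            # rotation in degrees
    translate = [int(torch.randint(-4, 5, (1,))) for _ in range(2)]
    scale = float(torch.empty(1).uniform_(0.8, 1.2))
    shear = [float(torch.empty(1).uniform_(-10, 10)), 0.0]
    view = TF.affine(img, angle=angle, translate=translate, scale=scale, shear=shear)
    params = torch.tensor([angle, *translate, scale, shear[0]])
    return img, view, params

# Usage: img is a C x H x W tensor, e.g. a 1 x 28 x 28 MNIST digit.
img = torch.rand(1, 28, 28)
_, view, params = make_affine_pair(img)
```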
This paper proposes a Recurrent Affine Transform Encoder (RATE) that can be used for image representation learning. We propose a learning architecture that enables a CNN encoder to learn the affine transform parameters of images. The proposed architecture decomposes an affine transform matrix into two transform matrices and learns them jointly in a self-supervised manner. The proposed RATE is trained on unlabeled image data without any ground truth and recurrently infers the affine transform parameters of input images. The inferred parameters can be used to represent images in canonical form, greatly reducing image variations caused by affine transforms such as rotation, scaling, and translation. Unlike the spatial transformer network, the proposed RATE does not need to be embedded into other networks and trained with the aid of other learning objectives. We show that the proposed RATE learns the affine transform parameters of images and achieves impressive image representation results in terms of invariance to translation, scaling, and rotation. We also show that incorporating RATE into an existing classification model enhances classification performance and makes it more robust to distortion.

Index Terms: canonical image base, self-supervised learning, representation learning.
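As a rough sketch of how such recurrent inference could look (an assumed interface, not the published RATE code), the snippet below lets a small CNN predict a 2x3 affine matrix and repeatedly resamples the image with that prediction, pushing it toward a canonical pose.

```python
# Minimal sketch (assumed, not the RATE implementation): a CNN predicts an
# affine matrix, the image is resampled with it, and the step is repeated so
# the residual transform shrinks toward the identity (canonical form).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffineEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 6)
        # Initialize the prediction at the identity transform.
        self.head.weight.data.zero_()
        self.head.bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):
        return self.head(self.features(x)).view(-1, 2, 3)

def canonicalize(encoder, img, steps=3):
    """Recurrently apply the predicted affine transform to normalize the image."""
    for _ in range(steps):
        theta = encoder(img)                                    # N x 2 x 3
        grid = F.affine_grid(theta, img.shape, align_corners=False)
        img = F.grid_sample(img, grid, align_corners=False)
    return img

# Usage: push a batch of 1 x 28 x 28 images toward their canonical pose.
encoder = AffineEncoder()
canonical = canonicalize(encoder, torch.rand(8, 1, 28, 28))
```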