2021
DOI: 10.1109/access.2021.3087665
Multi-Style Unsupervised Image Synthesis Using Generative Adversarial Nets

Abstract: Unsupervised cross-domain image-to-image translation is a very active topic in computer vision and graphics. The task poses two challenges: 1) the lack of paired training data and 2) the many possible outputs for a single input image. Existing methods either rely on paired data or perform only one-to-one translation. This paper proposes a novel Multi-Style Unsupervised image synthesis model using Generative Adversarial Nets (MSU-GAN) to overcome these disadvantages. Firstly, the encoder-decoder structure is used to…
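The abstract describes an encoder-decoder generator that maps one input image to multiple stylized outputs. As a rough illustration, here is a minimal PyTorch sketch of such a generator conditioned on a sampled style code; the layer sizes and the concatenation-based style injection are assumptions for illustration, not the published MSU-GAN architecture.

```python
# Minimal sketch of an encoder-decoder generator mapping one content image
# plus a sampled style code to a translated image. Layer sizes and the
# style-injection scheme are illustrative assumptions, not MSU-GAN's design.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, style_dim=8):
        super().__init__()
        # Content encoder: image -> spatial feature map
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: feature map (+ broadcast style code) -> image
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128 + style_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x, style):
        h = self.encoder(x)
        # Broadcast the style vector over the spatial grid and concatenate,
        # so different style codes yield different outputs for one input.
        s = style.view(style.size(0), -1, 1, 1).expand(-1, -1, h.size(2), h.size(3))
        return self.decoder(torch.cat([h, s], dim=1))

g = Generator()
x = torch.randn(1, 3, 64, 64)                       # one unpaired source image
outs = [g(x, torch.randn(1, 8)) for _ in range(3)]  # three distinct styles
```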


Cited by 3 publications (3 citation statements)
References 30 publications
“…and then plotted the autocorrelation plot for visualization. We then used the XGB model to check for overfitting using the validation and training errors [19][20][21][22].…”
Section: GANs on Yahoo Stock Data (mentioning)
confidence: 99%
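For context, the overfitting check this excerpt describes can be sketched as follows: train an XGBoost model while tracking training and validation error per boosting round, then compare the two curves. The synthetic data and split ratio are assumptions, and the placement of `eval_metric` varies across xgboost versions.

```python
# Sketch of an overfitting check: compare training vs. validation RMSE
# across boosting rounds. Data here is synthetic for illustration.
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=0.1, size=500)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

model = xgb.XGBRegressor(n_estimators=200, eval_metric="rmse")
model.fit(X_tr, y_tr, eval_set=[(X_tr, y_tr), (X_val, y_val)], verbose=False)

hist = model.evals_result()
train_rmse = hist["validation_0"]["rmse"]
val_rmse = hist["validation_1"]["rmse"]
# A widening gap between the two curves signals overfitting.
print(f"final train RMSE {train_rmse[-1]:.4f}, val RMSE {val_rmse[-1]:.4f}")
```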
“…For example, Gao et al. [15] decomposed image stylization into four steps: feature encoding, feature destylization, feature restylization, and feature decoding, which improved the model's ability to retain content and capture style. Lv et al. [16] and Lee et al. [17] decomposed image translation into two aspects, style and content, encoded them separately, and then fed both aspects into the generation process. Liu et al. [18] proposed UNIT, which can translate across different scenes, species, and faces.…”
Section: Related Work (mentioning)
confidence: 99%
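The two-branch encoding attributed to Lv et al. and Lee et al. can be sketched roughly as below: one encoder keeps spatial content, another pools a global style code, and the decoder consumes both, so style can come from a reference image. The module shapes and broadcast scheme are illustrative assumptions, not either paper's exact design.

```python
# Sketch of separate content/style encoding feeding a shared decoder.
# All layer sizes are illustrative placeholders.
import torch
import torch.nn as nn

content_enc = nn.Sequential(nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU())  # keeps spatial layout
style_enc = nn.Sequential(                                          # pools to a global code
    nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 8),
)
decoder = nn.Sequential(nn.ConvTranspose2d(64 + 8, 3, 4, 2, 1), nn.Tanh())

src, ref = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
c = content_enc(src)                           # content from the source image
s = style_enc(ref).view(1, 8, 1, 1).expand(-1, -1, c.size(2), c.size(3))
out = decoder(torch.cat([c, s], dim=1))        # source content in the reference style
```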
“…However, this method still lacks the ability to accurately distinguish between known classes and anomalies. The second method resynthesizes a new input image from the semantic map predicted by the segmentation network and detects anomalies by analyzing the feature differences among the original input image, the predicted semantic map, and the synthesized image [11]. This method extracts objects outside the network's training classes well but introduces a problem that cannot be ignored.…”
Section: Introduction (mentioning)
confidence: 99%
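The resynthesis-based detection this excerpt describes can be sketched as follows: a synthesizer conditioned only on the predicted semantic map re-renders the input, and pixels it cannot explain score as anomalous. Here `segmenter` and `synthesizer` are one-layer placeholders standing in for pretrained networks, and the per-pixel L1 difference is a simplification of the feature comparison in the cited work.

```python
# Sketch of resynthesis-based anomaly detection: score each pixel by how
# poorly an image re-rendered from predicted semantics matches the input.
import torch
import torch.nn as nn
import torch.nn.functional as F

segmenter = nn.Conv2d(3, 19, 1)    # placeholder: image -> per-class logits
synthesizer = nn.Conv2d(19, 3, 1)  # placeholder: semantic map -> image

def anomaly_map(image: torch.Tensor) -> torch.Tensor:
    """Per-pixel anomaly score from input/resynthesis disagreement."""
    logits = segmenter(image)                  # predicted semantic map
    sem = F.softmax(logits, dim=1)
    resynth = synthesizer(sem)                 # image re-rendered from semantics
    # Regions the synthesizer cannot explain (unknown objects) reconstruct
    # badly, so the per-pixel difference serves as the anomaly score.
    return (image - resynth).abs().mean(dim=1)

scores = anomaly_map(torch.randn(1, 3, 128, 128))  # (1, 128, 128) heat map
```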