2019
DOI: 10.3390/app9224780
Image-To-Image Translation Using a Cross-Domain Auto-Encoder and Decoder

Abstract: Recently, several studies have focused on image-to-image translation. However, the quality of the translation results is lacking in certain respects. We propose a new image-to-image translation method that minimizes such shortcomings using an auto-encoder and an auto-decoder. This method involves pre-training two auto-encoder and decoder pairs, one for each of the source and target image domains, cross-connecting the two pairs, and adding a feature mapping layer. Our method is quite simple and straightforward to adopt but very ef…
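The architecture described in the abstract can be sketched in code. This is a minimal illustration under stated assumptions, not the paper's exact implementation: the convolutional layer sizes, the 1x1 feature-mapping convolution, and all names (make_encoder, feature_mapping, translate_A_to_B) are assumptions made for illustration.

```python
import torch
import torch.nn as nn

def make_encoder():
    # Toy convolutional encoder; compresses a 3-channel image 4x spatially.
    return nn.Sequential(
        nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
    )

def make_decoder():
    # Toy convolutional decoder; mirrors the encoder back to image space.
    return nn.Sequential(
        nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
    )

# Step 1: pre-train one auto-encoder/decoder pair per domain on reconstruction.
enc_A, dec_A = make_encoder(), make_decoder()  # source-domain pair
enc_B, dec_B = make_encoder(), make_decoder()  # target-domain pair

# Step 2: cross-connect the source encoder to the target decoder through a
# feature mapping layer bridging the two separately learned latent spaces.
feature_mapping = nn.Conv2d(128, 128, kernel_size=1)  # hypothetical 1x1 bridge

def translate_A_to_B(x_A):
    z_A = enc_A(x_A)            # encode with the source-domain encoder
    z_B = feature_mapping(z_A)  # map into the target latent space
    return dec_B(z_B)           # decode with the target-domain decoder
```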

Cited by 19 publications (12 citation statements). References 24 publications.

Citation statements, ordered by relevance:
“…In the field of computer vision, Ref. [36] shares a similar concept with ours for the image translation task.…”
Section: Reusing Pre-trained Models
Confidence: 92%
“…The additional parameters help the components adapt to each other. This is similar to the ‘feature mapping layer’ from [36], which fills the gap between the representations that the encoder is pre-trained to generate and those that the decoder is pre-trained to reconstruct from. It can be said that this layer maps the separately pre-trained language spaces.…”
Section: Intermediate Layer
Confidence: 99%
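The bridging idea quoted above can be sketched as a training loop in which both pre-trained components stay frozen and only the feature mapping layer is updated. This continues the sketch after the abstract (reusing enc_A, dec_B, and feature_mapping) and assumes paired supervision with an L1 reconstruction loss; neither choice is confirmed by the quoted sources.

```python
import torch

# enc_A, dec_B, and feature_mapping come from the sketch after the abstract;
# only the bridge layer receives gradient updates here.
for p in list(enc_A.parameters()) + list(dec_B.parameters()):
    p.requires_grad = False  # freeze both pre-trained components

optimizer = torch.optim.Adam(feature_mapping.parameters(), lr=1e-4)
loss_fn = torch.nn.L1Loss()

def mapping_train_step(x_A, x_B_target):
    # Gradients flow only into feature_mapping, which learns to map the
    # frozen source latent space onto what the frozen decoder expects.
    optimizer.zero_grad()
    y = dec_B(feature_mapping(enc_A(x_A)))
    loss = loss_fn(y, x_B_target)
    loss.backward()
    optimizer.step()
    return loss.item()
```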
“…Moreover, we also incorporated an autoencoder model [42] into the trained network. The autoencoder is an unsupervised learning technique that consists of two networks, an encoder and a decoder.…”
Section: Automated Transformation Design
Confidence: 99%
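The encoder/decoder pair described in this quote can be illustrated with a minimal auto-encoder trained on a reconstruction loss, where the training target is the input itself (hence unsupervised). The layer sizes and the 784-dimensional flattened-image input are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, dim=784, latent=32):
        super().__init__()
        # Encoder compresses the input to a low-dimensional latent code.
        self.encoder = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent))
        # Decoder reconstructs the input from that code.
        self.decoder = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(),
                                     nn.Linear(128, dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(16, 784)                      # stand-in batch of flattened images
loss = nn.functional.mse_loss(model(x), x)   # unsupervised: target is the input
loss.backward()
opt.step()
```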
“…Auto-encoders may be used for data compression [35], in which the compressed representation keeps the information in a more compact format, as well as for denoising, in which the model is trained to reconstruct clean data from noisy input. They can also be used for image-to-image translation [36], by randomly sampling from the compressed representation and decoding it to generate a new image, and for dimensionality reduction [37–40]. By training an auto-encoder, the network can learn a compressed representation of the data that captures the most important features and is capable of generating new images, as in the variational auto-encoder (VAE) [41]. In the auto-encoder-based model, the latent space layer is responsible for performing dimensionality reduction.…”
Section: Introduction
Confidence: 99%
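As a sketch of the last point in this quote, a variational auto-encoder keeps a low-dimensional latent layer (the dimensionality-reduction step) and generates new images by sampling from it. The architecture below is a generic VAE illustration with assumed layer sizes, not taken from references [35]–[41].

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, dim=784, latent=16):
        super().__init__()
        self.enc = nn.Linear(dim, 128)
        self.mu = nn.Linear(128, latent)      # mean of q(z|x)
        self.logvar = nn.Linear(128, latent)  # log-variance of q(z|x)
        self.dec = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(),
                                 nn.Linear(128, dim), nn.Sigmoid())

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

vae = VAE()
# Generating a new image: sample from the prior and decode.
z_new = torch.randn(1, 16)
x_new = vae.dec(z_new)
```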