Recently, several studies have focused on image-to-image translation; however, the quality of the translated results is still lacking in certain respects. We propose a new image-to-image translation method that minimizes such shortcomings using an auto-encoder and an auto-decoder. The method pre-trains an auto-encoder and auto-decoder pair for each of the source and target image domains, cross-connects the two pairs, and adds a feature mapping layer between them. Our method is simple and straightforward to adopt yet very effective in practice, and we experimentally demonstrate that it significantly enhances the quality of image-to-image translation. We evaluate it on the well-known cityscapes, horse2zebra, cat2dog, maps, summer2winter, and night2day datasets, where it shows qualitative and quantitative improvements over existing models.

Appl. Sci. 2019, 9, 4780

... translation method. In this paper, we propose a new translation method in which the mapping from the source domain to the target domain works better, and thus suppresses mistranslation, by transferring the learned source and target feature spaces using both an auto-encoder and an auto-decoder. In contrast to commonly used feature space learning methods [9][10][11][12][13], our approach utilizes an auto-encoder and an auto-decoder, which are pretrained for the source domain and the target domain, respectively, and cross-connects them by mapping their feature spaces across the two domains.

Author Contributions: Conceptualization and methodology, J.Y., H.E., and Y.S.C.; investigation, J.Y. and H.E.; data curation and conceiving experiments, J.Y.; performing experiments and designing results, H.E.; writing, original draft preparation, J.Y. and H.E.; writing, review and editing, J.Y., H.E., and Y.S.C.
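The cross-connection described above can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the authors' implementation: the layer sizes, module names, and the 1x1-convolution form of the feature mapping layer are assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """A small convolutional auto-encoder, one instance per image domain.
    Each instance would be pre-trained to reconstruct images of its own domain."""
    def __init__(self, channels=3, features=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, features, 4, stride=2, padding=1),  # downsample to feature space
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(features, channels, 4, stride=2, padding=1),  # upsample back to image
            nn.Tanh(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class CrossDomainTranslator(nn.Module):
    """Cross-connects the source-domain encoder with the target-domain decoder
    through a learnable feature mapping layer (here a 1x1 convolution)."""
    def __init__(self, ae_source, ae_target, features=16):
        super().__init__()
        self.encoder = ae_source.encoder              # pre-trained on the source domain
        self.mapping = nn.Conv2d(features, features, 1)  # maps source features to target features
        self.decoder = ae_target.decoder              # pre-trained on the target domain

    def forward(self, x):
        return self.decoder(self.mapping(self.encoder(x)))

# Usage: translate a (hypothetical) source-domain image into the target domain.
ae_source, ae_target = AutoEncoder(), AutoEncoder()
translator = CrossDomainTranslator(ae_source, ae_target)
out = translator(torch.randn(1, 3, 64, 64))
print(out.shape)  # torch.Size([1, 3, 64, 64])
```

In practice the two auto-encoders would first be trained separately on their own domains; only then are the pairs cross-connected and the feature mapping layer trained to bridge the two feature spaces.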
Appendix A. We provide our source code at https://github.com/Rooooss/I2I_CD_AE.