2022
DOI: 10.1007/978-3-031-19803-8_2

Unpaired Image Translation via Vector Symbolic Architectures

Cited by 20 publications (8 citation statements). References 21 publications.
“…Baselines We compare with GAN-based image translation methods including MUNIT (Huang et al 2018), LPTN (Liang, Zeng, and Zhang 2021), CUT (Park et al 2020), TSIT (Jiang et al 2020), F-Sesim (Zheng, Cham, and Cai 2021) and VSAIT (Theiss et al 2022).…”
Section: Methods
confidence: 99%
“…As shown in Figures 5-7, the MUNIT (Huang et al 2018), CUT (Park et al 2020), TSIT (Jiang et al 2020) and F-Sesim (Zheng, Cham, and Cai 2021) suffer from structural distortions and artifacts. Although LPTN (Liang, Zeng, and Zhang 2021) and VSAIT (Theiss et al 2022) can retain the structural features better (see Figures 5 and 6), they have limited style transfer ability. VSAIT (Theiss et al 2022) produces a satisfactory result when the source and target domains share similar styles (see Figure 7), whereas it still lacks style transfer ability.…”
Section: Qualitative Comparisons
confidence: 99%
“…Hyperdimensional Prototypes Refinement: Recently, Hyperdimensional Computing has been used in computer vision tasks like few-shot learning [17,26], out-of-distribution detection [45], and image translation [42], which leverage quasi-orthogonal hyperdimensional representations without inducing much training and inference overhead. The initial hyperdimensional prototype (hp) is obtained based on the single-pass raw data as:…”
Section: Dual-prototype Self-augment and Refinement
confidence: 99%
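
The cited passage describes single-pass hyperdimensional prototype construction but the accompanying equation is not reproduced here. Below is a minimal, hypothetical Python/NumPy sketch of what such a prototype could look like under the usual HDC bundling convention; the dimensionality D, the random-projection encoder, and the function names are assumptions for illustration, not the cited paper's formulation.

import numpy as np

# Assumed hypervector dimensionality; high D makes random encodings quasi-orthogonal.
D = 10_000
rng = np.random.default_rng(0)

def encode(x, projection):
    # Map a raw feature vector to a bipolar hypervector via a fixed random projection.
    return np.sign(projection @ x)

def initial_prototype(samples, projection, dim=D):
    # Single-pass prototype: bundle (elementwise sum) the encodings of all samples,
    # then binarize back to a bipolar hypervector.
    bundled = np.zeros(dim)
    for x in samples:
        bundled += encode(x, projection)
    return np.sign(bundled)

# Toy usage: 32 samples of 128-dimensional raw features.
raw = rng.normal(size=(32, 128))
proj = rng.normal(size=(D, 128))
hp = initial_prototype(raw, proj)
print(hp.shape, np.unique(hp))

Because bundled quasi-orthogonal hypervectors stay nearly orthogonal to unrelated inputs, such a prototype can be formed in one pass over the data without gradient training, which matches the low-overhead claim in the quoted statement.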