2019
DOI: 10.48550/arxiv.1906.06558
Preprint

Mask Based Unsupervised Content Transfer

Abstract: We consider the problem of translating, in an unsupervised manner, between two domains where one contains some additional information compared to the other. The proposed method disentangles the common and separate parts of these domains and, through the generation of a mask, focuses the attention of the underlying network to the desired augmentation alone, without wastefully reconstructing the entire target. This enables state-of-the-art quality and variety of content translation, as shown through extensive qu…

Cited by 1 publication (2 citation statements) · References 32 publications
“…UNIT [22] and MUNIT [14], for example, use a shared representation for image-to-image translation. Other works [29,6,25] use shared representations to disentangle the common content of two domains from the separate part. Unlike these methods, our work disentangles motion from style over videos.…”
Section: Shared Geometric Representation
Confidence: 99%
“…The generator consists of 9 residual blocks, each containing convolution, ReLU, and Instance Normalization layers. The discriminator consists of 3 fully connected and Leaky ReLU layers, followed by a final sigmoid activation, similar to Mokady et al. [25].…”
Section: B2 Network Architecture
Confidence: 99%
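The architecture described in the citation statement above can be sketched roughly as follows. This is a minimal, non-authoritative PyTorch reconstruction: the channel width (64), the discriminator's input/hidden sizes (256, 128, 64), kernel sizes, and the LeakyReLU slope are illustrative assumptions not stated in the quoted text, and the actual implementation by the citing authors or by Mokady et al. [25] may differ.

```python
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """One residual block built from convolution, Instance Normalization,
    and ReLU layers, as the citation statement describes."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Skip connection: output keeps the input's spatial shape.
        return x + self.body(x)


# Generator: 9 residual blocks (width 64 is an assumed choice).
generator = nn.Sequential(*[ResidualBlock(64) for _ in range(9)])

# Discriminator: 3 fully connected + LeakyReLU layers, then a final
# sigmoid activation; all layer sizes here are assumptions.
discriminator = nn.Sequential(
    nn.Linear(256, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1), nn.LeakyReLU(0.2),
    nn.Sigmoid(),
)
```

A 2×64×16×16 batch passes through the generator unchanged in shape, and the discriminator maps a 256-dim feature vector to a scalar in (0, 1).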