2017
DOI: 10.1145/3072959.3073683

Visual attribute transfer through deep image analogy

Abstract: We propose a new technique for visual attribute transfer across images that may have very different appearance but have perceptually similar semantic structure. By visual attribute transfer, we mean transfer of visual information (such as color, tone, texture, and style) from one image to another. For example, one image could be that of a painting or a sketch while the other is a photo of a real scene, and both depict the same type of scene. Our technique finds semantically-meaningful de…
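The abstract describes matching images at a semantic level rather than a pixel level. As a rough illustration of that general idea (not the authors' implementation), the sketch below extracts features from one layer of a pretrained VGG-19 and computes a brute-force nearest-neighbor field between the two feature maps. The layer choice (relu4_1), the fixed 224x224 resize, and the single-scale brute-force matching are illustrative assumptions; the paper's method instead uses a bidirectional, coarse-to-fine search over several CNN layers.

```python
# Minimal sketch of matching two images in deep feature space.
# Illustrative only: single layer, single scale, brute-force matching.
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF
from PIL import Image

def extract_features(img_path, end_layer=21, size=224):
    """Return L2-normalized deep features (C, H, W) from a pretrained VGG-19."""
    vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
    img = Image.open(img_path).convert("RGB").resize((size, size))
    x = TF.normalize(TF.to_tensor(img),
                     mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    with torch.no_grad():
        feat = vgg[:end_layer](x.unsqueeze(0)).squeeze(0)  # layers up to relu4_1
    return torch.nn.functional.normalize(feat, dim=0)

def dense_correspondence(feat_a, feat_b):
    """Brute-force nearest-neighbor field mapping every position in A to B."""
    c, h, w = feat_a.shape
    a = feat_a.reshape(c, -1).t()        # (H*W, C) descriptors of A
    b = feat_b.reshape(c, -1).t()        # (H*W, C) descriptors of B
    sim = a @ b.t()                      # cosine similarities (features are unit-norm)
    idx = sim.argmax(dim=1)              # best match in B for each position of A
    return torch.stack((torch.div(idx, w, rounding_mode="floor"), idx % w),
                       dim=1).reshape(h, w, 2)
```

Calling dense_correspondence(extract_features("photo.jpg"), extract_features("painting.jpg")) would return, for every spatial position of the first feature map, the coordinates of its best match in the second; the file names are placeholders.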

Cited by 442 publications (364 citation statements)
References 41 publications
“…Although it produces impressive results for some style exemplars, it was shown to suffer from certain high‐frequency artifacts caused by the parametric nature of the synthesis algorithm [FJL∗16, FJS∗17]. To prevent texture distortion, researchers have proposed techniques to combine the advantages of patch‐based synthesis and the deep features learned by neural network [LW16, LYY∗17]. These approaches, however, have significant computational overhead and are not suitable for real‐time applications.…”
Section: Related Work
confidence: 99%
“…Discussion . Our constrained mapping was inspired by the nearest‐neighbor field upsampling used in the Deep Analogy work [LYY*17, §4.4] that constrains the matches at each layer to come from a local region around the location at a previous layer. When the input and style images are similar, this technique performs well.…”
Section: Painterly Harmonization Algorithm
confidence: 99%
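The constrained mapping described in the snippet above, where matches at a finer layer are restricted to a local window around the match inherited from the coarser layer, can be sketched as follows. This is a simplified illustration under stated assumptions: the NNF is stored as an (H, W, 2) tensor of (y, x) coordinates, the window radius and the brute-force window search are hypothetical choices, and the PatchMatch-style propagation and bidirectional constraints used in the paper itself are omitted.

```python
# Sketch of nearest-neighbor field (NNF) upsampling with a local search constraint.
import torch

def upsample_nnf(coarse_nnf, fine_hw, scale=2):
    """Scale a coarse NNF of (y, x) matches, shape (H, W, 2), up to the finer layer's grid."""
    nnf = coarse_nnf.permute(2, 0, 1).unsqueeze(0).float()               # (1, 2, H, W)
    nnf = torch.nn.functional.interpolate(nnf, size=fine_hw, mode="nearest")
    return (nnf.squeeze(0).permute(1, 2, 0) * scale).long()              # (H', W', 2)

def refine_nnf(nnf, feat_a, feat_b, radius=2):
    """Re-match each position of A inside a small window around its inherited seed in B."""
    c, h, w = feat_a.shape
    refined = nnf.clone()
    for y in range(h):
        for x in range(w):
            sy, sx = nnf[y, x].tolist()                       # seed from the coarser layer
            sy, sx = min(max(sy, 0), h - 1), min(max(sx, 0), w - 1)  # keep seed in bounds
            y0, y1 = max(sy - radius, 0), min(sy + radius + 1, h)
            x0, x1 = max(sx - radius, 0), min(sx + radius + 1, w)
            window = feat_b[:, y0:y1, x0:x1].reshape(c, -1)   # candidate descriptors in B
            scores = feat_a[:, y, x] @ window                 # similarity to each candidate
            best = int(scores.argmax())
            refined[y, x, 0] = y0 + best // (x1 - x0)
            refined[y, x, 1] = x0 + best % (x1 - x0)
    return refined
```

Restricting the search window in this way keeps the per-layer refinement cheap and prevents matches from drifting far from the coarse-level correspondence, which is the behavior the cited passage relies on when the input and style images are similar.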
“…Our method automatically harmonizes the compositing of an element into a painting. Given the proposed painting and element on the left, we show the compositing results (cropped for best fit) of unadjusted cut‐and‐paste, Deep Image Analogy [LYY*17], and our method.…”
Section: Introduction
confidence: 99%
“…However, the training and test images are not truly watercolour style. A recent work attempts to combine both the Image Analogies and deep learning approaches [17], but is computationally expensive. Such example based approaches have some limitations.…”
Section: Related Work
confidence: 99%