2022
DOI: 10.1109/tip.2022.3149237
Semantic Context-Aware Image Style Transfer

Cited by 25 publications (4 citation statements)
References 29 publications
“…Subsequently, it iteratively optimizes the content image to match the style of the referenced image. Since then, numerous methods have been proposed to enhance various aspects of performance, such as controllability [18][19][20][22], text-driven transfer [2,3,11,12], and quality [24][25][26][27][28].…”
Section: Style Transfer
Citation type: mentioning
confidence: 99%
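The statement above refers to the classic iterative, optimization-based formulation of style transfer: starting from the content image, the output is repeatedly updated so that its deep features preserve the content while their channel-wise correlations match the style image. Below is a minimal PyTorch sketch of that idea, not the method of the cited paper; the VGG-19 layer indices, loss weights, optimizer, and step count are illustrative assumptions.

```python
# Minimal sketch of iterative, optimization-based style transfer (Gatys-style).
# Layer indices, weights, optimizer, and step count are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = vgg19(weights="DEFAULT").features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

CONTENT_LAYERS = {21}               # conv4_2 (assumed choice)
STYLE_LAYERS = {0, 5, 10, 19, 28}   # conv1_1 .. conv5_1 (assumed choice)

def extract(x):
    """Run x through VGG-19 and collect activations at the chosen layers."""
    content, style = {}, {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in CONTENT_LAYERS:
            content[i] = x
        if i in STYLE_LAYERS:
            style[i] = x
    return content, style

def gram(feat):
    """Channel-wise feature correlations used as the style representation."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def stylize(content_img, style_img, steps=300, style_weight=1e6):
    """Iteratively optimize the output image (initialized from the content image)
    to keep content features while matching the style image's Gram statistics.
    Inputs are assumed to be preprocessed (1, 3, H, W) tensors on `device`."""
    with torch.no_grad():
        c_target, _ = extract(content_img)
        _, s_feats = extract(style_img)
        s_target = {k: gram(v) for k, v in s_feats.items()}

    out = content_img.clone().requires_grad_(True)
    opt = torch.optim.Adam([out], lr=0.02)
    for _ in range(steps):
        opt.zero_grad()
        c_out, s_out = extract(out)
        c_loss = sum(F.mse_loss(c_out[k], c_target[k]) for k in c_target)
        s_loss = sum(F.mse_loss(gram(s_out[k]), s_target[k]) for k in s_target)
        (c_loss + style_weight * s_loss).backward()
        opt.step()
    return out.detach()
```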
“…Yeh et al. (2020) designed two statistical indicators, style validity and content consistency, to evaluate style transfer algorithms, promoting the design of standardized evaluation methods. Liao and Huang (2022) proposed a new semantic context-aware IST approach that performs semantic context matching through a hierarchical local-to-global network structure. Babaeizadeh and Ghiasi (2018) presented a method that adjusts and controls the generated stylized images on a trained model in real time.…”
Section: Related Work
Citation type: mentioning
confidence: 99%
“…The scene structure and object boundaries of the content image need to be retained for high-quality IST, while the appearance should be aligned with the style image. In recent years, IST techniques (Gatys et al., 2016; Ghiasi et al., 2017; Junginger et al., 2018; Karras et al., 2019; Li et al., 2018; Li & Wand, 2016; Liao & Huang, 2022; Qiao et al., 2021; Wang et al., 2020; Yao et al., 2019; Yeh et al., 2020; Zhang & Dana, 2017) based on deep learning (DL) have demonstrated that the correlations among features extracted by CNNs capture visual content and style well and can be exploited to synthesize images with the desired content and style.…”
Section: Introduction
Citation type: mentioning
confidence: 99%
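The "correlations among features" mentioned in the statement above are usually computed as the Gram matrix of CNN activations, and the distance between two images' Gram matrices gives a simple measure of style similarity. The sketch below illustrates that idea under assumed choices (VGG-19 features through relu4_1); it is not the evaluation setup of any paper cited here.

```python
# Illustrative sketch: the Gram matrix of CNN features summarizes style, and the
# distance between Gram matrices gives a simple style (dis)similarity score.
# The VGG-19 backbone and layer cutoff are assumptions, not the cited paper's setup.
import torch
from torchvision.models import vgg19

encoder = vgg19(weights="DEFAULT").features[:21].eval()  # through relu4_1 (assumed)

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    """Channel-by-channel correlations of feature maps: returns (B, C, C)."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

@torch.no_grad()
def style_distance(img_a: torch.Tensor, img_b: torch.Tensor) -> float:
    """Lower values mean the two images have more similar feature correlations."""
    ga = gram_matrix(encoder(img_a))
    gb = gram_matrix(encoder(img_b))
    return torch.mean((ga - gb) ** 2).item()
```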
“…However, many of these methods require an iterative process or slow, computationally expensive post-processing. Prior works also explore style transfer using visual cues from semantic segmentation [10], [21], [22], [23], or improve photorealism via the matting Laplacian [18], [24], the screened Poisson equation [25], or photorealistic smoothing [15] as a post-processing step. Our method belongs to the family of techniques in which segmentation information is explicitly used to address the style transfer problem.…”
Section: Style Transfer
Citation type: mentioning
confidence: 99%
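One common way to use segmentation information explicitly, as described above, is to match feature statistics region by region, so that each semantic class in the content image borrows the style statistics of the same class in the style image. The sketch below shows such a region-wise, AdaIN-like transform as one illustrative possibility; the function name and the assumption that masks are already downsampled to feature resolution are hypothetical, and this is not the specific algorithm of the papers cited above.

```python
# Illustrative sketch of segmentation-guided style transfer: match feature
# statistics per semantic region (an AdaIN-like transform applied label by label).
# This is one possible use of segmentation, not the algorithm of any cited paper.
import torch

def region_wise_stat_transfer(content_feat, style_feat, content_mask, style_mask,
                              labels, eps=1e-5):
    """
    content_feat, style_feat: (1, C, H, W) feature maps from a shared encoder.
    content_mask, style_mask: (1, H, W) integer label maps, assumed to be
    downsampled to the feature resolution beforehand.
    For each label present in both images, shift/scale the content features in
    that region to the mean/std of the style features in the matching region.
    """
    out = content_feat.clone()
    for lbl in labels:
        c_sel = (content_mask[0] == lbl)   # (H, W) boolean region in content
        s_sel = (style_mask[0] == lbl)     # (H, W) boolean region in style
        if c_sel.sum() < 2 or s_sel.sum() < 2:
            continue                       # label missing or too small in one image
        c_region = content_feat[:, :, c_sel]   # (1, C, Nc)
        s_region = style_feat[:, :, s_sel]     # (1, C, Ns)
        c_mean = c_region.mean(dim=2, keepdim=True)
        c_std = c_region.std(dim=2, keepdim=True) + eps
        s_mean = s_region.mean(dim=2, keepdim=True)
        s_std = s_region.std(dim=2, keepdim=True) + eps
        out[:, :, c_sel] = (c_region - c_mean) / c_std * s_std + s_mean
    return out
```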