2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00093
DualAST: Dual Style-Learning Networks for Artistic Style Transfer

Cited by 65 publications (74 citation statements)
References 24 publications
“…The pioneering work of Gatys et al [2016] first demonstrated the strength of Deep Convolutional Neural Networks (DCNNs) in artistic style transfer, where the content and style can be expressed as multi-level feature statistics extracted from the pre-trained DCNNs. Since then, extensive works have been proposed to improve the performance of artistic style transfer in several aspects, such as efficiency [Johnson et al, 2016; Ulyanov et al, 2016], quality [Li and Wand, 2016a; Ulyanov et al, 2017; Zhang et al, 2019; Chen et al, 2021b; Chen et al, 2021a], generalization [Chen and Schmidt, 2016; Li et al, 2017b; Sheng et al, 2018], diversity [Li et al, 2017a; Ulyanov et al, 2017; Chen et al, 2021c], and controllability [Babaeizadeh and Ghiasi, 2018; Yao et al, 2019].…”
Section: Methods Based On Pre-trained Network
confidence: 99%
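
The feature-statistics view of style mentioned in the quote above can be made concrete with a short sketch. The following is a minimal illustration (not DualAST's implementation) of the Gram-matrix style representation of Gatys et al. [2016], computed on a pre-trained VGG-19; the chosen layer indices and the mean-squared style loss are illustrative assumptions.

```python
# Minimal sketch: style as Gram matrices of pre-trained VGG-19 features
# (illustrative layer choices, not DualAST's actual configuration).
import torch
import torchvision.models as models

vgg = models.vgg19(pretrained=True).features.eval()
STYLE_LAYERS = {1, 6, 11, 20, 29}  # assumed indices of relu1_1 ... relu5_1

def gram_matrix(feat):
    # feat: (B, C, H, W) -> (B, C, C) channel-wise feature correlations
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def style_representation(img):
    # img: (B, 3, H, W), already normalized the way VGG expects
    grams, x = [], img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            grams.append(gram_matrix(x))
    return grams

def style_loss(generated, style):
    # Mean-squared distance between multi-level Gram matrices.
    return sum(torch.mean((g - s) ** 2)
               for g, s in zip(style_representation(generated),
                               style_representation(style)))
```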
“…The seminal works of (Gatys, Ecker, and Bethge 2016, 2015) have proved the power of Deep Convolutional Neural Networks (DCNNs) (Simonyan and Zisserman 2014) in style transfer and texture synthesis, where the Gram matrices of the features extracted from different layers of DCNNs are used to represent the style of images. Further works improved it in many aspects, including efficiency (Johnson, Alahi, and Fei-Fei 2016), quality (Jing et al 2018; Kolkin, Salavon, and Shakhnarovich 2019; Park and Lee 2019; Wang et al 2020b, 2021; Chen et al 2020, 2021b; An et al 2021), generality (Li et al 2017; Huang and Belongie 2017; Zhang, Zhu, and Zhu 2019; Jing et al 2020), and diversity (Wang et al 2020a; Chen et al 2021c). For interactive style transfer, (Gatys et al 2017) introduced user spatial control into (Gatys, Ecker, and Bethge 2016), which is further accelerated by (Lu et al 2017).…”
Section: Related Work
confidence: 99%
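
Among the "generality" works cited in that passage, adaptive instance normalization (Huang and Belongie 2017) is the canonical mechanism for arbitrary style transfer. Below is a minimal sketch of it, with my own function names, to make the idea of matching channel-wise feature statistics concrete.

```python
# Minimal sketch of adaptive instance normalization (AdaIN):
# re-scale normalized content features to match the style feature statistics.
import torch

def adain(content_feat, style_feat, eps=1e-5):
    # Both inputs are (B, C, H, W) feature maps from the same encoder layer.
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content_feat - c_mean) / c_std + s_mean
```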
“…Liu et al [2021b] present an adaptive attention normalization module (AdaAttN) to consider both shallow and deep features for attention score calculation. GAN-based methods [Kotovenko et al 2019a,b; Sanakoyeu et al 2018a; Svoboda et al 2020; Zhu et al 2017] have been successfully used in collection style transfer, which considers style images in a collection as a domain [Chen et al 2021b; Lin et al 2021].…”
Section: Related Work
confidence: 99%
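
As a rough illustration of the attention-based normalization idea behind AdaAttN, the sketch below computes per-position style statistics from an attention map between content and style features. It is a simplified, single-layer approximation: the actual module also embeds concatenated shallow-to-deep features with learned projections, which is omitted here, and all names are mine.

```python
# Simplified sketch of attention-weighted normalization in the spirit of AdaAttN.
import torch

def instance_norm(x, eps=1e-5):
    mean = x.mean(dim=(2, 3), keepdim=True)
    std = x.std(dim=(2, 3), keepdim=True) + eps
    return (x - mean) / std

def attn_normalize(content_feat, style_feat):
    # content_feat, style_feat: (B, C, H, W) with the same channel count.
    b, c, h, w = content_feat.shape
    q = instance_norm(content_feat).view(b, c, -1).transpose(1, 2)  # (B, HWc, C)
    k = instance_norm(style_feat).view(b, c, -1)                    # (B, C, HWs)
    v = style_feat.view(b, c, -1).transpose(1, 2)                   # (B, HWs, C)
    attn = torch.softmax(torch.bmm(q, k), dim=-1)                   # (B, HWc, HWs)
    mean = torch.bmm(attn, v)                                       # per-point style mean
    var = torch.bmm(attn, v * v) - mean ** 2                        # per-point style variance
    std = torch.clamp(var, min=0).sqrt()
    mean = mean.transpose(1, 2).reshape(b, c, h, w)
    std = std.transpose(1, 2).reshape(b, c, h, w)
    return std * instance_norm(content_feat) + mean
```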
“…We introduce DE with adversarial loss to enable the network to learn the style distribution. Recent style transfer models employ GAN [Goodfellow et al 2014] to align the distribution of generated images with specific artistic images [Chen et al 2021b; Lin et al 2021]. The adversarial loss can enhance the holistic style of the stylization results, while it strongly relies on the distribution of datasets.…”
Section: Domain Enhancement
confidence: 99%
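
A minimal sketch of the adversarial term such GAN-based methods add to pull the distribution of stylized outputs toward a collection of artworks: the toy discriminator and the non-saturating BCE form below are illustrative assumptions, not the exact setup of any cited paper.

```python
# Minimal sketch of an adversarial loss for collection style transfer.
import torch
import torch.nn as nn

discriminator = nn.Sequential(          # toy patch-style discriminator
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 1, 4, stride=1, padding=1),
)
bce = nn.BCEWithLogitsLoss()

def d_loss(real_art, stylized):
    # Discriminator learns to separate real artworks from stylized outputs.
    real_logits = discriminator(real_art)
    fake_logits = discriminator(stylized.detach())
    return bce(real_logits, torch.ones_like(real_logits)) + \
           bce(fake_logits, torch.zeros_like(fake_logits))

def g_adv_loss(stylized):
    # Generator is rewarded when its output is classified as real artwork.
    fake_logits = discriminator(stylized)
    return bce(fake_logits, torch.ones_like(fake_logits))
```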