2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.01411

Towards Vivid and Diverse Image Colorization with Generative Color Prior

Cited by 86 publications (44 citation statements)
References 45 publications
“…The vast majority of colorization algorithms [9,10,11,12,13,14,15,16,17,18,19,20,21] use regression loss functions. Cheng et al. [9] extracted image features using a CNN and combined bilateral filtering to enhance colorization.…”
Section: Related Work
confidence: 99%
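For illustration, a minimal sketch of the regression formulation these methods share: a CNN maps the L* (lightness) channel of a CIELAB image to its a*b* chrominance channels and is trained with an L2 (MSE) loss. The model and names below (TinyColorizer) are hypothetical, not Cheng et al.'s architecture.

import torch
import torch.nn as nn

# Hypothetical minimal colorization regressor: predicts the a*b* channels
# of a CIELAB image from its L* channel. Illustrates the shared regression
# formulation only, not any specific published architecture.
class TinyColorizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),  # two output channels: a*, b*
        )

    def forward(self, L):
        return self.net(L)

model = TinyColorizer()
L = torch.rand(8, 1, 64, 64)        # batch of lightness channels
ab_true = torch.rand(8, 2, 64, 64)  # ground-truth chrominance
loss = nn.functional.mse_loss(model(L), ab_true)  # the regression loss
loss.backward()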
“…Su et al. [19] cropped the objects in the image, constructed a multi-channel CNN to colorize each cropped object and the overall image, and fused the resulting color images according to weights to improve the colorization effect. Wu et al. [20] used GANs to generate color images associated with grayscale images to guide the colorization of those grayscale images. Jin et al. [21] constructed a three-channel HistoryNet covering image category, semantics, and colorization, using category and semantic information to guide colorization.…”
Section: Related Work
confidence: 99%
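The weight-based fusion idea described above can be sketched as follows. This is an illustrative reconstruction under assumed names and shapes (fuse_colorizations, per-pixel fusion scores), not Su et al.'s published module: per pixel, a softmax over fusion scores blends the full-image prediction with each instance prediction inside its mask.

import torch

# Hedged sketch of weighted fusion of per-object and full-image
# colorizations. All shapes and the scoring scheme are assumptions
# for illustration.
def fuse_colorizations(full_ab, instance_abs, instance_masks, instance_logits):
    """full_ab:         (2, H, W) full-image chrominance prediction
    instance_abs:    list of (2, H, W) per-object predictions, already
                     pasted back into full-image coordinates
    instance_masks:  list of (1, H, W) binary masks locating each object
    instance_logits: list of (1, H, W) per-pixel fusion scores
    Returns a (2, H, W) fused prediction."""
    scores = [torch.zeros_like(instance_logits[0])]  # full-image branch
    preds = [full_ab]
    for ab, mask, logit in zip(instance_abs, instance_masks, instance_logits):
        # Restrict each instance to its mask; large negative score elsewhere
        # so the softmax ignores it outside the object.
        scores.append(torch.where(mask > 0, logit, torch.full_like(logit, -1e9)))
        preds.append(ab)
    weights = torch.softmax(torch.cat(scores, dim=0), dim=0)  # (N+1, H, W)
    stacked = torch.stack(preds, dim=0)                       # (N+1, 2, H, W)
    return (weights.unsqueeze(1) * stacked).sum(dim=0)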
“…Similarly, [Zhao et al., 2018] use mean IoU of segmentation results on the PASCAL VOC2012 dataset [Everingham et al., 2012]. [Wu et al., 2021] use a no-reference measure called the colourfulness score [Hasler and Süsstrunk, 2003], which incorporates the means and standard deviations of the a* and b* channels of CIEL*a*b* in a parametric model to compute a measure of how colourful the image is. The parameters were learned from data based on psychophysical experiments.…”
Section: Introduction
confidence: 99%
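The colourfulness score in the CIELAB form described above can be sketched as below. The 0.37 weighting of the mean term follows the CIELAB variant reported by Hasler and Süsstrunk; the more widely implemented variant works in an RGB opponent approximation with a 0.3 weight, so treat the exact coefficient here as an assumption.

import numpy as np

# Sketch of the CIELAB colourfulness score: combine the standard
# deviations and means of the a* and b* chrominance channels.
def colourfulness_lab(a, b):
    """a, b: 2-D arrays holding the a* and b* channels of a CIELAB image."""
    sigma_ab = np.hypot(a.std(), b.std())  # spread of the chrominance
    mu_ab = np.hypot(a.mean(), b.mean())   # offset of the mean from neutral grey
    return sigma_ab + 0.37 * mu_ab         # 0.37 assumed per the CIELAB variant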
“…[Górriz et al., 2019] compare the L1 distance between convolutional features in the VGG19 model [Simonyan and Zisserman, 2015] for ground-truth and colourised samples. Similarly, [Lee et al., 2020] and [Wu et al., 2021] use the Fréchet Inception Distance [Heusel et al., 2017], which compares Inception feature statistics for colourisations versus ground truth over 50K samples. [Zhang et al., 2018] developed a perceptual measure based on the features of deep neural networks, the Learned Perceptual Image Patch Similarity (LPIPS) metric, and this has also been used to measure colourisation quality in [Su et al., 2020, Yoo et al., 2019, Kim et al., 2021].…”
Section: Introduction
confidence: 99%
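For reference, the Fréchet Inception Distance reduces to a closed form between Gaussian fits of the Inception features of the two image sets. A minimal sketch given precomputed feature matrices follows; the feature extraction with an Inception network is omitted.

import numpy as np
from scipy.linalg import sqrtm

# FID between two sets of Inception activations, each of shape
# (n_samples, feature_dim): squared distance between the means plus
# a trace term over the covariances.
def fid(feats_real, feats_fake):
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):  # discard tiny imaginary parts from numerics
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(s1 + s2 - 2.0 * covmean))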