SIGGRAPH ASIA 2016 Technical Briefs 2016
DOI: 10.1145/3005358.3005375
Deep patch-wise colorization model for grayscale images

Cited by 9 publications (7 citation statements)
References 9 publications
“…This algorithm was evaluated with the help of human participants, who were asked to distinguish between colorized and ground-truth images. In [24], the authors introduced a patch-based colorization model using two different loss functions in a vectorized Convolutional Neural Network framework. During colorization, patches are extracted from the image and colorized independently.…”
Section: Related Work
confidence: 99%
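The patch-wise step described above can be sketched as follows. This is a minimal illustration only: the patch size, stride, and non-overlapping grid are assumptions for demonstration, not values taken from the paper.

```python
import numpy as np

def extract_patches(gray, patch=32, stride=32):
    """Split a grayscale image into patches on a regular grid.

    Each patch would then be colorized independently by the network.
    Patch size and stride here are illustrative assumptions.
    """
    h, w = gray.shape
    patches = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append(gray[y:y + patch, x:x + patch])
    return np.stack(patches)

img = np.random.rand(64, 64)
batch = extract_patches(img)
print(batch.shape)  # (4, 32, 32)
```

Colorizing patches independently keeps each forward pass small, at the cost of possible color inconsistency across patch boundaries.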
“…As pointed out in many papers [20], [23], [24], [26], the Euclidean loss function is not an optimal choice because it leads to the so-called averaging problem: the system produces desaturated, grayish sepia-tone results.…”
Section: Our Approach
confidence: 99%
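The averaging problem has a simple numerical intuition: when a pixel's plausible colors are multimodal, the prediction that minimizes mean squared (Euclidean) error is the mean of the modes, which is itself not a plausible color. A toy sketch, with made-up RGB targets:

```python
import numpy as np

# Hypothetical multimodal targets for one pixel: half the training
# examples color it pure red, half pure blue (RGB values in [0, 1]).
targets = np.array([[1.0, 0.0, 0.0]] * 50 + [[0.0, 0.0, 1.0]] * 50)

# The constant prediction minimizing mean squared error over these
# targets is their mean -- a muddy mixture, not either true mode.
best_l2 = targets.mean(axis=0)
print(best_l2)  # [0.5 0.  0.5]

# Its MSE is lower than committing to either plausible color.
mse = lambda p: np.mean(np.sum((targets - p) ** 2, axis=1))
print(mse(best_l2) < mse(np.array([1.0, 0.0, 0.0])))  # True
```

This is why several of the cited works replace the plain Euclidean loss with alternatives (e.g. classification over quantized color bins) that can commit to one mode instead of averaging them.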
“…This algorithm was evaluated with the help of human participants, who were asked to distinguish between colorized and ground-truth images. In [16], the authors introduced a patch-based colorization model using two different loss functions in a vectorized Convolutional Neural Network framework. During colorization, patches are extracted from the image and colorized independently.…”
Section: Related Work
confidence: 99%
“…AlexNet achieved a 16% error rate in the ImageNet challenge. In the next couple of years, VGG-19 [2] with 19 layers and GoogLeNet [3] with 22 layers reduced the error rate to a few percent.…”
Section: Introduction
confidence: 99%