2022
DOI: 10.48550/arxiv.2204.02850
Preprint
Influence of Color Spaces for Deep Learning Image Colorization

Abstract: Colorization is a process that converts a grayscale image into a color one that looks as natural as possible. Over the years this task has received a lot of attention. Existing colorization methods rely on different color spaces: RGB, YUV, Lab, etc. In this chapter, we aim to study their influence on the results obtained by training a deep neural network, to answer the question: "Is it crucial to correctly choose the right color space in deep-learning based colorization?". First, we briefly summarize the liter…

Cited by 3 publications (3 citation statements)
References 49 publications
“…However, for the automatic image colorization task, the YUV and CIELAB color spaces (the latter introduced by the International Commission on Illumination, CIE, in 1976) are mostly preferred, covering the entire range of human color perception. As recently demonstrated by Ballester et al [65], it cannot be concluded that one color space is always preferable in colorization applications; performance instead depends on the type of input images. For our Hyper-U-NET methodology, the L*a*b* space, also used in the other methods tested in this work (Section 2.4), was selected, with some modifications applied to handle the historical input images.…”
Section: Proposed Methods
confidence: 99%
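The quoted passage names the CIELAB (L*a*b*) and YUV color spaces without spelling out the conversion that colorization pipelines rely on. As a hedged illustration, a minimal sketch of the standard sRGB-to-CIELAB conversion (CIE 1976 formulas with a D65 white point; these are textbook definitions, not code from the cited papers):

```python
def _srgb_to_linear(c):
    """Undo the sRGB gamma curve (c in 0..1)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def rgb_to_lab(r, g, b):
    """Convert an 8-bit sRGB pixel to CIE L*a*b* (D65 white point).

    Standard CIE 1976 formulas, shown for illustration only.
    """
    rl, gl, bl = (_srgb_to_linear(v / 255.0) for v in (r, g, b))
    # Linear sRGB -> XYZ (D65) matrix
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    xn, yn, zn = 0.95047, 1.0, 1.08883  # D65 reference white

    def f(t):
        d = 6 / 29
        return t ** (1 / 3) if t > d ** 3 else t / (3 * d * d) + 4 / 29

    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    L = 116 * fy - 16        # lightness: the "grayscale" channel
    a = 500 * (fx - fy)      # green-red chroma axis
    b_lab = 200 * (fy - fz)  # blue-yellow chroma axis
    return L, a, b_lab
```

This decomposition is why Lab is popular for colorization: the network can take the L channel as its grayscale input and only has to predict the two chroma channels a and b.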
“…In the literature, three prior types lead to different colorization methods. Several surveys [7,1,3] provide a detailed overview of these approaches. In this work, we will focus on iColoriT: Towards Propagating Local Hint to the Right Region in Interactive Colorization by Leveraging Vision Transformer [14], one of the few hybrid approaches combining both color priors learned from a large dataset of images and color hints indicated by users.…”
Section: Introduction
confidence: 99%
“…They make use of the image itself to promote the self-optimization learning of the relationship between features and expression, which is fast and accurate. In order to improve the accuracy of deep learning object detection models, researchers are currently exploring the combination of deep learning methods with color space conversion [13]. Liu et al [14] converted an original potato RGB image to the HSL, HSV, Lab, XYZ, and YCrCb color spaces and then created Mask R-CNN models for each color space to detect wilt plaque on leaves.…”
Section: Introduction
confidence: 99%
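The conversion step described in the last citation statement (one RGB image re-expressed in several color spaces before feeding per-space models) can be sketched for a single pixel with Python's standard-library colorsys module. This covers only HSV, HLS, and YIQ (Lab, XYZ, and YCrCb would need extra formulas or a library such as OpenCV), and it shows only the preprocessing conversion, not the Mask R-CNN models from the cited work:

```python
import colorsys

def pixel_in_spaces(r, g, b):
    """Re-express one 8-bit RGB pixel in several color spaces.

    Only the conversion step is illustrated; colorsys provides
    HSV, HLS, and YIQ (a YUV-like luma/chroma space).
    """
    rn, gn, bn = r / 255.0, g / 255.0, b / 255.0
    return {
        "rgb": (rn, gn, bn),
        "hsv": colorsys.rgb_to_hsv(rn, gn, bn),  # hue, saturation, value
        "hls": colorsys.rgb_to_hls(rn, gn, bn),  # hue, lightness, saturation
        "yiq": colorsys.rgb_to_yiq(rn, gn, bn),  # luma + two chroma axes
    }

# Example: a pure red pixel.
spaces = pixel_in_spaces(255, 0, 0)
# spaces["hsv"] -> (0.0, 1.0, 1.0): hue 0 (red), fully saturated, full value.
```

In a full pipeline, the same conversion would be applied per pixel (or vectorized over the whole image) before training one detector per color space, as the cited study does.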