Techniques for colorizing monochrome still images with convolutional neural networks (CNNs) have been widely researched. However, the results of automatic colorization often differ from the user's intentions and from historical fact, so a large amount of color-correction work is still needed to produce a colorized video. This is a major problem in settings such as broadcast production, where footage must be colorized in accordance with historical fact. In this article, we propose a practical video colorization framework that can easily reflect the user's intentions. The proposed framework combines two CNNs, a user-guided still-image-colorization CNN and a color-propagation CNN, so that the correction work can be performed efficiently. The user-guided still-image-colorization CNN produces key frames by colorizing several monochrome frames from the target video on the basis of user-specified colors and color-boundary information. The color-propagation CNN then automatically colorizes the entire video on the basis of those key frames while suppressing discontinuous changes in color between frames. A quantitative evaluation showed that the framework can produce color video reflecting the user's intentions with less effort than earlier methods.
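To make the two-stage workflow concrete, the sketch below shows how such a pipeline could be orchestrated: key frames are colorized from user hints, and color is then propagated to the remaining frames. This is a minimal NumPy illustration, not the authors' implementation; the function names (`colorize_key_frame`, `propagate_color`, `colorize_video`) are hypothetical, and trivial placeholder math stands in for each CNN.

```python
import numpy as np

def colorize_key_frame(gray, color_hints, boundary_hints):
    """Stage 1 (sketched): user-guided still-image colorization.

    gray:           (H, W) monochrome frame in [0, 1]
    color_hints:    (H, W, 3) sparse user-specified colors (0 where unset)
    boundary_hints: (H, W) user-drawn color-boundary map (unused in this
                    placeholder; the real CNN would consume it)
    Returns an (H, W, 3) colorized key frame.
    """
    # Placeholder for the user-guided CNN: replicate luminance to RGB
    # and overwrite pixels where the user supplied a color hint.
    rgb = np.repeat(gray[..., None], 3, axis=2)
    hinted = color_hints.sum(axis=2, keepdims=True) > 0
    return np.where(hinted, color_hints, rgb)

def propagate_color(prev_colored, gray):
    """Stage 2 (sketched): carry color from the previous colorized frame
    to the next monochrome frame. The real color-propagation CNN would
    also suppress frame-to-frame color discontinuities.
    """
    # Placeholder: reuse the previous frame's chroma (deviation from its
    # per-pixel mean) on top of the new frame's luminance.
    chroma = prev_colored - prev_colored.mean(axis=2, keepdims=True)
    return np.clip(gray[..., None] + chroma, 0.0, 1.0)

def colorize_video(frames, key_frame_indices, hints):
    """Colorize a monochrome video from user-guided key frames.

    Assumes frame 0 is a key frame so propagation always has a
    colorized predecessor to start from.
    """
    colored = [None] * len(frames)
    # Stage 1: colorize the user-selected key frames from hints.
    for i in key_frame_indices:
        color_hints, boundary_hints = hints[i]
        colored[i] = colorize_key_frame(frames[i], color_hints, boundary_hints)
    # Stage 2: propagate color forward between key frames.
    for i in range(1, len(frames)):
        if colored[i] is None:
            colored[i] = propagate_color(colored[i - 1], frames[i])
    return colored
```

The design point the sketch is meant to capture is the division of labor: the expensive, interactive correction effort is concentrated on a few key frames, and the propagation stage spreads that work across the whole sequence automatically.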