SUMMARY

Image colorization is a semi-automatic process of adding color to monochrome images and videos. With existing methods, the required human assistance can be limited to annotating the image with color scribbles or selecting a reference image from which the colors are transferred to a source image or video sequence. In the work reported here, we have explored how textural information can be exploited to improve this process. For every scribbled image, we determine a discriminative textural feature domain. The whole image is then projected onto this feature space, which makes it possible to estimate the textural similarity between any two pixels. For single-image colorization based on a set of color scribbles, our contribution lies in using the proposed feature space rather than the luminance channel. In the case of color transfer used for colorization of video sequences, the feature space is generated from a reference image, and textural similarity is used to match pixels between the reference and source images. We have conducted extensive experimental validation, which confirmed the importance of using textural information and demonstrated that our method significantly improves colorization results.
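
To illustrate the general idea (not the authors' actual feature construction), the following minimal sketch describes each pixel by a vector of simple multi-scale texture statistics and measures pixel similarity in that feature space instead of on raw luminance; the window sizes, feature choices, and Gaussian kernel bandwidth are illustrative assumptions only.

```python
# Minimal sketch: per-pixel textural features (multi-scale local mean and
# standard deviation of luminance) and a similarity measure computed in
# that feature space rather than on the luminance channel alone.
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def textural_features(lum, windows=(3, 7, 15)):
    """Project a luminance image onto a simple per-pixel texture space."""
    feats = []
    for w in windows:
        mean = uniform_filter(lum, size=w)
        sq_mean = uniform_filter(lum ** 2, size=w)
        std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
        feats.extend([mean, std])
    # Stack into an (H, W, D) feature volume and normalize each channel.
    f = np.stack(feats, axis=-1)
    return (f - f.mean(axis=(0, 1))) / (f.std(axis=(0, 1)) + 1e-8)

def pixel_similarity(features, p, q, sigma=1.0):
    """Textural similarity of pixels p=(row, col) and q=(row, col):
    a Gaussian kernel on their feature-space distance."""
    d = features[p] - features[q]
    return np.exp(-np.dot(d, d) / (2.0 * sigma ** 2))

# Example usage on a synthetic luminance image.
lum = gaussian_filter(np.random.rand(64, 64), sigma=1.0)
f = textural_features(lum)
print(pixel_similarity(f, (10, 10), (40, 40)))
```

In a scribble-based setting, such a similarity would weight how strongly a scribbled color propagates to neighboring pixels; in the color-transfer setting, it would score candidate matches between reference and source pixels.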