We present stego-texture, a novel texture synthesis method that allows users to deliver personalized messages through beautiful, decorative textures. Our approach is inspired by the success of recent work on generating marbling textures with mathematical functions. An input image or text message is transformed into an intricate texture by combining the seven basic, reversible functions provided in the system; the input can later be recovered by reversing these functions. During the design process, the parameters of the operations are automatically recorded, encrypted, and invisibly embedded into the final pattern to create a stego-texture. In this way, the receiver can extract the hidden message from the stego-texture without any extra information from the sender. To ensure that the delivered message is imperceptibly concealed by the texture, we propose a new technique that automatically creates a background harmonious with the message, based on a set of visual perception cues.
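To make the record-and-reverse idea concrete, here is a minimal, hypothetical Python sketch: a message raster is scrambled by a sequence of invertible operations whose parameters are logged, and the message is recovered by undoing the logged operations in reverse order. The specific operations below (array rolls and rotations) and the plain parameter list are our own illustrative assumptions, not the paper's seven functions or its encryption and embedding scheme.

```python
# Illustrative sketch only: reversible ops with recorded parameters.
import numpy as np

def roll_rows(img, k):        # undone by roll_rows(x, -k)
    return np.roll(img, k, axis=0)

def roll_cols(img, k):        # undone by roll_cols(x, -k)
    return np.roll(img, k, axis=1)

def rotate90(img, times):     # undone by rotate90(x, -times)
    return np.rot90(img, times)

FORWARD = {"roll_rows": roll_rows, "roll_cols": roll_cols, "rot90": rotate90}
INVERSE = {"roll_rows": lambda x, k: roll_rows(x, -k),
           "roll_cols": lambda x, k: roll_cols(x, -k),
           "rot90":     lambda x, k: rotate90(x, -k)}

def encode(message, ops):
    """Apply each (name, param) op in order; the op list plays the role
    of the recorded parameters that would be encrypted and embedded."""
    out = message.copy()
    for name, p in ops:
        out = FORWARD[name](out, p)
    return out

def decode(texture, ops):
    """Undo the recorded ops in reverse order to recover the message."""
    out = texture.copy()
    for name, p in reversed(ops):
        out = INVERSE[name](out, p)
    return out

if __name__ == "__main__":
    msg = np.arange(64).reshape(8, 8)
    ops = [("roll_rows", 3), ("rot90", 1), ("roll_cols", 5)]
    assert np.array_equal(decode(encode(msg, ops), ops), msg)
```

In the paper's setting, the recorded parameter list is itself encrypted and invisibly embedded into the final texture, which is why the receiver needs no side channel from the sender.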
Image vectorization is one of the primary means of creating vector graphics. The quality of a vectorized image depends crucially on extracting accurate features from the input raster image. However, correct object edges can be difficult to detect when color gradients are weak. We present an image vectorization technique that operates on a color image augmented with a depth map and uses both color and depth edges to define vectorized paths, outputting the result as a diffusion curve image. The information extracted from the depth map gives us greater flexibility in manipulating the diffusion curves, in particular permitting high-level object segmentation. Our experimental results demonstrate that this method achieves high reconstruction quality and provides greater control over the organization and editing of vectorized images than existing work based on diffusion curves.
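To illustrate why a depth map helps when color gradients are weak, here is a small, hypothetical Python sketch that fuses color and depth gradients into a single edge map. The gradient operator, thresholds, and synthetic inputs are assumptions for illustration only; they are not the authors' feature extraction or their diffusion-curve fitting.

```python
# Illustrative sketch: union of color edges and depth edges.
import numpy as np

def gradient_magnitude(channel):
    gy, gx = np.gradient(channel.astype(float))
    return np.hypot(gx, gy)

def combined_edges(color, depth, color_thresh=10.0, depth_thresh=0.05):
    """color: H x W x 3 array; depth: H x W array.
    Depth edges recover object boundaries that weak color gradients miss."""
    color_mag = np.maximum.reduce(
        [gradient_magnitude(color[..., c]) for c in range(3)])
    depth_mag = gradient_magnitude(depth)
    return (color_mag > color_thresh) | (depth_mag > depth_thresh)

if __name__ == "__main__":
    # Synthetic example: two regions with nearly identical color but
    # clearly different depth.
    h, w = 64, 64
    color = np.full((h, w, 3), 120, dtype=np.uint8)
    color[:, w // 2:] += 2              # barely visible color step
    depth = np.zeros((h, w))
    depth[:, w // 2:] = 1.0             # clear depth step
    edges = combined_edges(color, depth)
    print("edge pixels found:", int(edges.sum()))
```

In this toy case the color gradient alone stays below its threshold, while the depth discontinuity exposes the object boundary; a vectorizer could then place diffusion-curve paths along the detected edge pixels.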
Chinese ink painting, also known as ink and wash painting, is a technically demanding art form. Creating Chinese ink paintings usually requires great skill, concentration, and years of training. This paper presents a novel real-time, automatic framework for converting images into the Chinese ink painting style. Given an input image, we first construct its saliency map, which captures the visual content of perceptually salient regions. Next, the image is abstracted and its salient edges are extracted with the help of the saliency map. The abstracted image is then diffused by a non-physical ink diffusion process and combined with the salient edges to obtain a composite image. Finally, the composite image is decolorized and texture-advected to synthesize the result in Chinese ink painting style. The whole pipeline is implemented on the GPU, enabling real-time performance. We also propose optional steps (foreground segmentation and image inversion) to improve rendering quality. Experimental results show that our method is two to three orders of magnitude faster than the current image-based Chinese ink painting rendering method while producing comparable results.
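As a rough illustration of the abstract-then-compose pipeline described above, the following Python sketch chains decolorization, blur-based abstraction, edge extraction, a crude stand-in for ink diffusion, and compositing. All filters and parameter values are assumptions chosen for illustration; this is not the paper's saliency-guided, GPU-based method, and it omits saliency maps and texture advection entirely.

```python
# Very simplified ink-wash-style sketch, not the paper's pipeline.
import numpy as np
from scipy.ndimage import gaussian_filter

def ink_wash(rgb):
    """rgb: H x W x 3 float array in [0, 1]; returns a grayscale 'ink' image."""
    gray = rgb @ np.array([0.299, 0.587, 0.114])       # decolorize
    abstracted = gaussian_filter(gray, sigma=2.0)      # coarse abstraction
    fine = gaussian_filter(gray, sigma=1.0)
    edges = np.abs(fine - abstracted)                  # DoG-style edge magnitude
    diffused = gaussian_filter(abstracted, sigma=3.0)  # crude stand-in for ink diffusion
    return np.clip(diffused - 4.0 * edges, 0.0, 1.0)   # darken strokes onto the wash

if __name__ == "__main__":
    img = np.random.rand(128, 128, 3)                  # placeholder input image
    result = ink_wash(img)
    print(result.shape, float(result.min()), float(result.max()))
```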