Affine transformations, layer blending, and artistic filters are popular processes that graphic designers employ to transform the pixels of an image and create a desired effect. Here, we examine approaches that synthesize new images: pixel-based compositing models and, in particular, models built on the distributed representations of deep neural networks. This paper focuses on synthesizing new images from a learned representation obtained from the VGG network. The approach offers an interesting creative process because information such as contours and shapes is effectively captured in the distributed representations of the hidden layers of a deep VGG network. Conceptually, if Φ is the function that transforms input pixels into the distributed representation h of the VGG layers, a new synthesized image X can be generated from its inverse, X = Φ⁻¹(h). We describe the concept behind the approach and present representative synthesized images and style-transferred examples.
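
Because Φ has no closed-form inverse, a common way to realize X = Φ⁻¹(h) in practice is to optimize the pixels of X by gradient descent until Φ(X) matches the target activations h. The following is a minimal sketch of that idea, assuming PyTorch and a pretrained torchvision VGG-19; the layer index, reference image, and hyperparameters are illustrative assumptions and not necessarily the exact configuration used in this paper.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pretrained VGG-19 feature extractor, frozen: this plays the role of Phi.
features = vgg19(weights="DEFAULT").features.to(device).eval()
for p in features.parameters():
    p.requires_grad_(False)

def phi(x, layer=21):
    """Map pixels to the activations of a chosen hidden VGG layer (the map Phi)."""
    for i, module in enumerate(features):
        x = module(x)
        if i == layer:
            return x

# Target representation h, taken here from a placeholder reference image
# (in practice, a real image tensor normalized as VGG expects).
content_img = torch.rand(1, 3, 224, 224, device=device)
h = phi(content_img).detach()

# Start from noise and optimize pixels so that Phi(X) ~= h, i.e. X ~= Phi^{-1}(h).
x = torch.rand_like(content_img, requires_grad=True)
optimizer = torch.optim.Adam([x], lr=0.05)

for step in range(300):
    optimizer.zero_grad()
    loss = F.mse_loss(phi(x), h)   # match hidden-layer activations
    loss.backward()
    optimizer.step()
    x.data.clamp_(0, 1)            # keep pixels in a valid range
```

Style transfer follows the same template: additional loss terms computed from other hidden layers (for example, Gram-matrix statistics of a style image) are added to the activation-matching objective before back-propagating to the pixels.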