Interaction paradigms based on 3D interfaces and virtual reality offer new possibilities for overcoming the limitations of query by example. We present a system that lets users navigate a 3D world in which they can take photographs to query an image database by content. Users can also interactively customize the virtual world by adding objects to the scene and editing object properties such as colors and textures.

The emergence of multimedia technology and the possibility of sharing and distributing image data through large-bandwidth computer networks have contributed to an increase of visual data in the global information exchange. Significant advances have been made in the development of efficient compression techniques, but techniques that enable efficient retrieval of visual data by content remain an active research topic.

Recently, researchers have developed new tools and interaction paradigms for searching visual information by referring directly to its content. Visual elements such as color, texture, shape, structure, and spatial relationships serve as clues for retrieving images with similar content. The most successful and commonly employed querying interfaces rely on query-by-example paradigms, which require users to sketch, either from scratch or from prototype images, the content of the image they're looking for. Although simple examples are easy to create, producing meaningful examples of complex scenes remains difficult.

The photographer's metaphor presented in this article facilitates authoring complex examples: the user takes a photograph of a (customizable) 3D environment rendered by the system. Because the system renders the environment, the paradigm's effectiveness doesn't depend on the user's painting abilities.
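The article does not specify which similarity measures its retrieval engine uses, but the idea of matching images by visual clues such as color can be sketched with a simple normalized color histogram and histogram intersection. The function and image names below are illustrative, not part of the described system:

```python
# Sketch of content-based retrieval by color similarity (illustrative only).
# An "image" here is just a list of (r, g, b) pixels; a real system would
# also extract texture, shape, and spatial-relationship features.
from collections import Counter

def color_histogram(pixels, bins=4):
    """Quantize each channel into `bins` levels and return a normalized histogram."""
    step = 256 // bins
    hist = Counter((r // step, g // step, b // step) for r, g, b in pixels)
    total = len(pixels)
    return {k: v / total for k, v in hist.items()}

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]: sum of per-bin minima of two normalized histograms."""
    return sum(min(h1.get(k, 0.0), h2.get(k, 0.0)) for k in set(h1) | set(h2))

def rank_by_content(query_pixels, database):
    """Return database image names sorted by decreasing color similarity to the query."""
    q = color_histogram(query_pixels)
    scored = [(histogram_intersection(q, color_histogram(px)), name)
              for name, px in database.items()]
    return [name for _, name in sorted(scored, reverse=True)]

# Toy database: a mostly-red image and a mostly-blue one.
db = {
    "sunset": [(250, 40, 30)] * 90 + [(20, 20, 200)] * 10,
    "ocean":  [(20, 30, 220)] * 95 + [(200, 200, 200)] * 5,
}
query = [(240, 50, 40)] * 100          # a predominantly red query photograph
print(rank_by_content(query, db))      # the red "sunset" image ranks first
```

In the photographer's metaphor, the query pixels would come from the snapshot the user takes of the rendered 3D scene rather than from a hand-drawn sketch.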