A variety of image generation methods have emerged in recent years, notably DALL-E 2, Imagen, and Stable Diffusion. While these methods, built on generative diffusion models conditioned on language input, have been shown to produce photorealistic images from text prompts, their capacity for materials design has not yet been explored. Here we use a trained Stable Diffusion model and treat it as an experimental system, examining its capacity to generate novel material designs, especially in the context of 3D material architectures. We demonstrate that this approach offers a paradigm for generating diverse material patterns and designs, using human-readable language as input, allowing us to explore a vast nature-inspired design portfolio for both novel architectured materials and granular media. We present a series of methods to translate 2D representations into 3D data, including movement through noise spaces via mixtures of text prompts, and image conditioning. We create physical samples using additive manufacturing and assess the properties of the designed materials using a coarse-grained particle simulation approach. We present case studies that use images as starting points for material generation, exemplified in two applications. First, a design based on Haeckel's classic lithographic print of a diatom, which we amalgamate with a spider web. Second, a design based on the image of a flame, which we amalgamate with a hybrid of spider web and wood structures. These design approaches result in complex materials forming solids or granular liquid-like media that can ultimately be tuned to meet target demands.
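To make the prompt-mixing and image-conditioning steps summarized above concrete, the following is a minimal sketch assuming a recent version of the Hugging Face diffusers library; the checkpoint name, input file, prompt texts, and mixing weight are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch, assuming a recent version of Hugging Face diffusers and transformers.
# Checkpoint, file names, prompts, and mixing weight are hypothetical examples.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed public Stable Diffusion checkpoint
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

# Image conditioning: start the reverse diffusion from a source image,
# e.g. a lithographic print of a diatom (file name is hypothetical).
init_image = Image.open("diatom_lithograph.png").convert("RGB").resize((512, 512))

def encode_prompt(text: str) -> torch.Tensor:
    """Return CLIP text embeddings for a single prompt."""
    tokens = pipe.tokenizer(
        text,
        padding="max_length",
        max_length=pipe.tokenizer.model_max_length,
        truncation=True,
        return_tensors="pt",
    ).input_ids.to(device)
    with torch.no_grad():
        return pipe.text_encoder(tokens)[0]

# Mixture of text prompts: linearly interpolate two prompt embeddings so the
# generated pattern blends both concepts (here, a spider web and a diatom shell).
alpha = 0.5
prompt_embeds = alpha * encode_prompt("a spider web, intricate silk threads") + \
                (1.0 - alpha) * encode_prompt("a diatom shell, porous lattice")

result = pipe(
    prompt_embeds=prompt_embeds,
    image=init_image,
    strength=0.6,        # how far to move away from the conditioning image
    guidance_scale=7.5,  # classifier-free guidance weight
).images[0]
result.save("hybrid_material_design.png")
```

The 2D output of such a run would then serve as input to the 2D-to-3D translation and additive-manufacturing steps described in the abstract.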