Modelling the impact of a material's mesostructure on device-level performance typically requires access to 3D image data containing all of the relevant information needed to define the geometry of the simulation domain. This image data must include sufficient contrast between phases and high enough resolution to capture the key details, while also having a large enough 3D field-of-view to be representative of the material in general. It is rarely possible to obtain data with all of these properties from a single imaging technique. In this paper, we present a method for combining information from pairs of distinct but complementary imaging techniques in order to accurately reconstruct the desired multi-phase, high-resolution, representative 3D images. Specifically, deep convolutional generative adversarial networks are used to implement super-resolution, style transfer, and dimensionality expansion. It is believed that this data-driven approach is superior to previously reported statistical material reconstruction methods, both in terms of its fidelity and its ease of use. Furthermore, much of the data required to train this algorithm already exists in the literature, waiting to be combined. As such, our open-source code could precipitate a step change in the materials sciences by generating the high-quality image volumes necessary to simulate behaviour at the mesoscale.
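
To make the super-resolution component concrete, the sketch below shows a minimal 3D convolutional generator of the general kind described above, written in PyTorch: it maps a low-resolution multi-phase volume (one channel per phase) to a higher-resolution volume of the same phases. The architecture, layer widths, scale factor, and the name `SRGenerator3D` are illustrative assumptions only, not the network reported in this paper.

```python
# Minimal sketch of a 3D super-resolution generator, assuming a one-hot
# phase encoding (n_phases channels) and a fixed integer upscaling factor.
# This is NOT the published architecture; it only illustrates the idea.
import torch
import torch.nn as nn


class SRGenerator3D(nn.Module):
    """Maps a low-resolution n-phase volume to a higher-resolution one."""

    def __init__(self, n_phases: int = 3, scale: int = 4, width: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(n_phases, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(width, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # Trilinear upsampling followed by a convolution avoids the
            # checkerboard artefacts transposed convolutions can introduce.
            nn.Upsample(scale_factor=scale, mode="trilinear",
                        align_corners=False),
            nn.Conv3d(width, n_phases, kernel_size=3, padding=1),
            nn.Softmax(dim=1),  # per-voxel phase probabilities
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


# Example: super-resolve a 16^3 three-phase volume to 64^3.
g = SRGenerator3D(n_phases=3, scale=4)
low_res = torch.rand(1, 3, 16, 16, 16)  # batch of phase-probability maps
high_res = g(low_res)                   # shape: (1, 3, 64, 64, 64)
print(high_res.shape)
```

In an adversarial training setup, a generator of this kind would be paired with a discriminator that compares its outputs against genuine high-resolution image data, so that the generator learns to produce volumes statistically indistinguishable from the real material.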