The virtual inpainting of artworks provides a nondestructive way to visualize restoration hypotheses, and it is especially attractive when physical restoration raises too many methodological and ethical concerns. At the same time, in Cultural Heritage applications, the level of detail and the accuracy of virtual reconstructions are crucial. We propose an inpainting algorithm based on a generative adversarial network with two generators: one for edges and one for colors. The color generator chromatically rebalances the result by enforcing a loss in the discretized gamut space of the dataset. In this way, our method follows the modus operandi of an artist: edges first, then the color palette, and finally the color tones. Moreover, we simulate the stochasticity of the lacunae in artworks with morphological variations of a random-walk mask that recreate various degradations, including craquelure. We showcase the performance of our model on a dataset of digital images of wall paintings from the Dunhuang UNESCO heritage site. Our restored images are visually satisfactory and quantitatively comparable to state-of-the-art approaches.
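
The abstract does not spell out the mask-generation routine, so the following is only a minimal sketch, assuming NumPy and SciPy, of one plausible way to trace a random walk and vary it morphologically so that thin, craquelure-like cracks and broader lacunae can both be simulated; the function name and parameters are illustrative, not the authors' implementation.

```python
# Minimal sketch (not the paper's exact procedure) of a random-walk mask
# with morphological variation, assuming NumPy and SciPy are available.
import numpy as np
from scipy.ndimage import binary_dilation

def random_walk_mask(height, width, n_steps=5000, thickness=1, seed=None):
    """Trace a random walk on an image grid and dilate it into a lacuna mask.

    `thickness` is the number of dilation iterations: small values yield thin,
    craquelure-like cracks; larger values yield broad missing regions.
    """
    rng = np.random.default_rng(seed)
    mask = np.zeros((height, width), dtype=bool)
    # Start the walk at a random pixel.
    y, x = rng.integers(0, height), rng.integers(0, width)
    # 8-connected moves.
    moves = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]
    for _ in range(n_steps):
        mask[y, x] = True
        dy, dx = moves[rng.integers(0, len(moves))]
        y = int(np.clip(y + dy, 0, height - 1))
        x = int(np.clip(x + dx, 0, width - 1))
    # Morphological variation: dilate the walk to change the degradation width.
    if thickness > 0:
        mask = binary_dilation(mask, iterations=thickness)
    return mask.astype(np.uint8)

# Example: a thin, crack-like mask and a broader lacuna on a 256x256 image.
crack = random_walk_mask(256, 256, n_steps=3000, thickness=1, seed=0)
lacuna = random_walk_mask(256, 256, n_steps=1500, thickness=6, seed=1)
```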