A palimpsest is a historical manuscript in which the original text (termed the under-text) was erased and overwritten with another script in order to recycle the parchment. One of the main challenges in studying palimpsests is revealing this under-text. Thanks to the development of multi-spectral imaging, the original text can sometimes be recovered through material differences between inks and parchment (Easton et al., in: 2011 19th European Signal Processing Conference, IEEE, 2011). In general, however, the revealed text can be observed only partially, owing to overlap with the newer text and degradation of the material. In this work, we propose to reveal the under-text completely using deep generative networks, leveraging prior spatial information about the under-text script. To optimize the under-text, we mimic the process of palimpsest creation: the under-text is generated by a separately trained generative network and mixed with the foreground text so that the result matches the palimpsest image. The mixing process is represented by a separate neural network, which is optimized jointly with the under-text image to match the original palimpsest. We also add a background generative network to compensate for the unevenness of the background, and we propose a novel way of training it that does not require isolated background samples and can use any region containing layers of text. To our knowledge, this paper presents the first attempt to solve palimpsest text-layer separation with deep generative networks. We evaluate the performance of our method on artificial and real palimpsest manuscripts by measuring character-recognition and pixel-wise accuracy of the reconstructed under-text.
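
The optimization described above can be sketched roughly as follows. This is a minimal illustration assuming a PyTorch-style setup; the module names (MixingNet, under_gen, bg_gen) and the latent_dim attribute are hypothetical placeholders, not the authors' actual implementation.

```python
# Sketch: optimize a latent code so that the generated under-text, mixed with
# the foreground text and a background term, reconstructs the palimpsest image.
import torch
import torch.nn as nn

class MixingNet(nn.Module):
    """Hypothetical network that mixes generated under-text with the
    foreground (over-text) to reproduce the palimpsest appearance."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, under_text, over_text):
        return self.net(torch.cat([under_text, over_text], dim=1))

def reconstruct(palimpsest, over_text, under_gen, bg_gen, steps=500, lr=1e-2):
    """Jointly optimize the under-text latent code, the mixing network, and the
    background generator so the composite matches the observed palimpsest."""
    z = torch.randn(1, under_gen.latent_dim, requires_grad=True)  # assumed attribute
    mixer = MixingNet()
    params = [z, *mixer.parameters(), *bg_gen.parameters()]
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        under = under_gen(z)              # frozen, separately trained generator
        background = bg_gen(palimpsest)   # compensates for uneven parchment
        pred = mixer(under, over_text) + background
        loss = loss_fn(pred, palimpsest)
        loss.backward()
        opt.step()
    return under_gen(z).detach()          # reconstructed under-text image
```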