The study of ancient writings is of great value to archaeology and philology. Photographed characters are an essential form of source material, but recognizing them manually is extremely time-consuming and expertise-dependent, so automatic classification is highly desirable. However, current performance is limited by the scarcity of annotated data. Data generation is an inexpensive yet effective remedy for this scarcity. Nevertheless, the diverse glyph shapes and complex background textures of photographic ancient characters make the generation task difficult, and existing methods produce unsatisfactory results. To this end, we propose an unsupervised generative adversarial network called AGTGAN in this paper. By explicitly modeling global and local glyph shape styles, followed by a stroke-aware texture transfer and an associate adversarial learning mechanism, our method generates characters with diverse glyphs and realistic textures. We evaluate our method on photographic ancient character datasets, including OBC306 and CSDD. It outperforms state-of-the-art methods on various metrics and produces samples of markedly greater diversity and authenticity. Augmenting training with our generated images, experiments on the largest photographic oracle bone character dataset show a significant increase in classification accuracy, up to 16.34%. The source code is available at https://github.com/Hellomystery/AGTGAN.
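To make the two-stage idea concrete, the following is a minimal, hypothetical sketch of such a pipeline: a glyph-shape stage chained into a texture stage, with a shared adversarial signal so the texture discriminator's gradient also influences the glyph stage (a stand-in for the associate adversarial learning described above). All module names (GlyphGenerator, TextureGenerator, Discriminator), the toy architectures, and the least-squares GAN loss are illustrative assumptions, not the authors' actual implementation; consult the repository above for the real one.

```python
import torch
import torch.nn as nn


def conv_block(c_in, c_out):
    """3x3 conv + ReLU, the only building block used in this toy sketch."""
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU())


class GlyphGenerator(nn.Module):
    """Stage 1 (hypothetical): deforms a clean source glyph toward the
    diverse shape styles seen in photographic characters."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 1, 3, padding=1), nn.Tanh())

    def forward(self, glyph):
        return self.net(glyph)


class TextureGenerator(nn.Module):
    """Stage 2 (hypothetical): renders the transformed glyph with a
    realistic background texture while preserving its strokes."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 1, 3, padding=1), nn.Tanh())

    def forward(self, glyph):
        return self.net(glyph)


class Discriminator(nn.Module):
    """Patch-style critic scoring the realism of generated vs. real images."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 32),
                                 nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, img):
        return self.net(img)


def adversarial_step(g_glyph, g_texture, disc, opt_g, opt_d, clean, real):
    """One unsupervised training step. The generators are chained, so the
    adversarial gradient flows back through both stages jointly."""
    fake = g_texture(g_glyph(clean))

    # Discriminator update (least-squares GAN loss, chosen here for stability).
    opt_d.zero_grad()
    d_loss = ((disc(real) - 1) ** 2).mean() + (disc(fake.detach()) ** 2).mean()
    d_loss.backward()
    opt_d.step()

    # Joint generator update: both stages receive the adversarial gradient.
    opt_g.zero_grad()
    g_loss = ((disc(fake) - 1) ** 2).mean()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()


if __name__ == "__main__":
    g1, g2, d = GlyphGenerator(), TextureGenerator(), Discriminator()
    opt_g = torch.optim.Adam(list(g1.parameters()) + list(g2.parameters()), lr=2e-4)
    opt_d = torch.optim.Adam(d.parameters(), lr=2e-4)
    clean = torch.rand(4, 1, 64, 64)  # stand-in for clean handprinted glyphs
    real = torch.rand(4, 1, 64, 64)   # stand-in for real photographic characters
    print(adversarial_step(g1, g2, d, opt_g, opt_d, clean, real))
```

Chaining the two generators under one adversarial objective, rather than training each stage in isolation, is what lets realism feedback on the final textured image also refine the intermediate glyph shapes.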