Text-to-face generation is a subdomain of text-to-image synthesis with a wide range of applications in the public safety domain. Due to the scarcity of suitable datasets, research on text-to-face generation has so far been limited. Most existing work relies on a partially trained generative adversarial network (GAN), in which a pre-trained text encoder extracts semantic features from the input sentence, and these features are then used to train the image decoder. In this work, we propose a fully trained GAN to generate realistic, natural face images: the text encoder and the image decoder are trained simultaneously, yielding more accurate and efficient results. In addition to the proposed methodology, we construct a dataset by combining LFW, CelebA, and a locally prepared dataset, and we label the images according to our defined classes. Through a variety of experiments, we show that the proposed fully trained GAN outperforms the partially trained baseline, generating good-quality images consistent with the input sentence. The visual results further strengthen these experiments, with the generated face images matching the given query.
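The distinction the abstract draws can be made concrete with a deliberately tiny sketch (an illustrative assumption, not the paper's actual GAN): in a partially trained pipeline the text-encoder parameters stay frozen while only the decoder is fitted, whereas in the fully trained setup the loss gradient also flows back into the encoder, so both parameter sets move together. Here both components are reduced to single scalar weights `a` (encoder) and `b` (decoder) fitted by gradient descent on a squared error; all values are hypothetical.

```python
# Toy sketch of "fully trained" joint optimization (not the paper's GAN):
# the encoder parameter `a` receives gradient updates alongside the
# decoder parameter `b`, instead of being frozen after pre-training.

x, y = 2.0, 3.0   # stand-ins for a text embedding and a target image feature
a, b = 0.5, 0.5   # "text encoder" and "image decoder" parameters
a0 = a            # remember the encoder's initial value
lr = 0.05         # learning rate

for _ in range(200):
    err = a * b * x - y      # forward pass: encode, decode, compare to target
    grad_b = err * a * x     # dL/db for L = 0.5 * err**2
    grad_a = err * b * x     # dL/da -- gradient flows back through the decoder
    b -= lr * grad_b
    a -= lr * grad_a         # joint update: the encoder keeps learning too
```

After training, the composed map `a * b * x` fits the target and the encoder weight has moved away from its initialization, which is exactly what a frozen pre-trained encoder cannot do.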