The synthesis of facial images from textual descriptions is a relatively difficult subfield of text-to-image synthesis, with applications in domains such as forensic science, game development, animation, digital marketing, and the metaverse. However, no prior work generates facial images from textual descriptions in Bangla, the fifth most spoken language in the world. This research introduces the first system to generate facial images from Bangla textual descriptions. The proposed model comprises two fundamental components: a text encoder and a Generative Adversarial Network (GAN). The text encoder is a pre-trained Bangla encoder, Bangla FastText, which transforms Bangla text into a latent vector representation. A Deep Convolutional GAN (DCGAN) then generates face images conditioned on this text embedding. Furthermore, a Bangla version of the CelebA dataset, CelebA Bangla, is created for this study. CelebA Bangla contains images of celebrities, their corresponding annotated Bangla facial attributes, and Bangla textual descriptions generated using a novel description generation algorithm. The proposed system attained a Fréchet Inception Distance (FID) of 126.708, an Inception Score (IS) of 12.361, and a Face Semantic Distance (FSD) of 20.23. The novel text embedding strategy used in this study outperforms prior work, and a thorough qualitative and quantitative analysis demonstrates the superior performance of the proposed system over other experimental systems.