Text-to-image (T2I) generation techniques have shown promising results using deep learning models. However, recent T2I methods mainly rely on complex, stacked architectures to maintain text-image consistency, or focus on generating global text information while overlooking finer details. In addition, brocade images differ from real-world images in several respects, such as their fine textures and complex pattern elements (totem elements). To address these problems, we first build a text-image dataset specifically focused on Chinese brocade. We then propose the brocade dual-discriminator generative adversarial network (BDD-GAN), which introduces an additional pattern discriminator and a multi-scale feature fusion module. BDD-GAN addresses the challenges of maintaining text-image consistency and of capturing the intricate textures and unique patterns of Chinese brocade in generated images. To fully verify the performance and effectiveness of the proposed method, we present ablation studies and comparisons with previous works. Experimental results show that our method synthesizes Chinese brocade images of higher quality, with richer totem details, than those produced by previous algorithms.