To address the challenge that traditional camouflage design methods struggle to evade detection by modern unmanned aerial reconnaissance, we propose a generative adversarial network model for camouflage pattern generation under color-semantic constraints. We first establish a reference-image generation model with color-semantic constraints that, through a sentence-encoding model, produces reference images with fundamental texture and color features by jointly optimizing an adversarial loss, a texture loss, and a pixel-level loss. We then design a color-standardization processing strategy based on the SimCLR framework, which generates semantic camouflage images in batches with respect to the reference images by combining data-augmentation strategies, positive-negative sample similarity measurement, and a sample structural-similarity algorithm. Qualitative and quantitative experimental results demonstrate that the proposed method achieves strong camouflage performance across different environmental settings.
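As a rough illustration of how the three terms in the reference-image generator objective could be combined, the sketch below assumes a PyTorch setup; the loss weights (lambda_tex, lambda_pix), the Gram-matrix texture term, and all function names are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal sketch (assumed, not the paper's exact formulation) of a composite
# generator objective combining adversarial, texture, and pixel-level terms.
import torch
import torch.nn.functional as F


def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    """Channel-wise Gram matrix of a feature map with shape (B, C, H, W)."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)


def generator_loss(d_fake_logits: torch.Tensor,
                   fake_feat: torch.Tensor,
                   ref_feat: torch.Tensor,
                   fake_img: torch.Tensor,
                   ref_img: torch.Tensor,
                   lambda_tex: float = 10.0,   # assumed weight
                   lambda_pix: float = 100.0   # assumed weight
                   ) -> torch.Tensor:
    # Adversarial term: push discriminator outputs for generated images toward "real".
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    # Texture term: match Gram statistics of generated and reference feature maps.
    tex = F.mse_loss(gram_matrix(fake_feat), gram_matrix(ref_feat))
    # Pixel-level term: L1 distance between the generated image and the reference image.
    pix = F.l1_loss(fake_img, ref_img)
    return adv + lambda_tex * tex + lambda_pix * pix
```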