Generative adversarial networks are widely used for image-to-image translation tasks such as image colorization, semantic synthesis, and style transfer. At present, however, training such image generation models typically depends on large paired datasets and supports translation between only two image domains; when a task spans more than two domains, these models lack scalability and robustness. To address these problems, this paper proposes a content and style transfer model based on a generative adversarial network (CS-GAN). The model simultaneously fuses style features (e.g., Monet, Cubism) and content features (e.g., color, texture) of fashion items on unpaired datasets, enabling translation across multiple image domains and thereby effectively transferring both the content and the style of fashion items. In particular, we propose a layer-consistent dynamic convolution (LCDC) method that encodes the style image as learnable convolution parameters, so that style features are learned adaptively and arbitrary style transfer of fashion items is performed more flexibly and efficiently. To validate the performance of the model, we conduct comparative experiments and analyze the results on a public fashion dataset. Compared with other mainstream methods, our method improves image synthesis quality as measured by the Inception Score (IS) and the Fréchet Inception Distance (FID).
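The abstract does not give implementation details for LCDC; the sketch below illustrates the general idea of style-conditioned dynamic convolution under stated assumptions: a small style encoder predicts per-channel depthwise kernels from the style image, and those kernels are then applied to the content feature map. All names (StyleKernelPredictor, dynamic_conv) and design choices (depthwise kernels, the grouped-convolution batching trick) are illustrative assumptions in PyTorch, not the authors' implementation.

```python
# Minimal sketch of style-conditioned dynamic convolution (PyTorch).
# Hypothetical names; the paper's actual LCDC may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StyleKernelPredictor(nn.Module):
    """Encodes a style image into per-channel convolution kernels."""
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.channels = channels
        self.kernel_size = kernel_size
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        # Predict one depthwise k x k filter per feature channel.
        self.fc = nn.Linear(128, channels * kernel_size * kernel_size)

    def forward(self, style: torch.Tensor) -> torch.Tensor:
        z = self.encoder(style).flatten(1)  # (B, 128) style code
        k = self.fc(z)                      # (B, C*k*k) flat kernels
        return k.view(-1, self.channels, 1, self.kernel_size, self.kernel_size)

def dynamic_conv(content_feat: torch.Tensor, kernels: torch.Tensor) -> torch.Tensor:
    """Applies a different depthwise kernel to each sample in the batch."""
    b, c, h, w = content_feat.shape
    k = kernels.shape[-1]
    # Fold the batch into the channel dim so one grouped conv applies
    # per-sample, per-channel kernels in a single call.
    x = content_feat.reshape(1, b * c, h, w)
    weight = kernels.reshape(b * c, 1, k, k)
    out = F.conv2d(x, weight, padding=k // 2, groups=b * c)
    return out.reshape(b, c, h, w)

# Usage: the style image drives the kernels applied to content features.
predictor = StyleKernelPredictor(channels=256)
content_feat = torch.randn(2, 256, 64, 64)  # content feature map
style_img = torch.randn(2, 3, 128, 128)     # style reference image
out = dynamic_conv(content_feat, predictor(style_img))
print(out.shape)  # torch.Size([2, 256, 64, 64])
```

This matches the abstract's description in the sense that the style image is turned into convolution parameters rather than a fixed feature statistic, which is what allows the transfer to adapt to arbitrary styles; the specific encoder architecture and kernel shape here are assumptions for the sake of a runnable example.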