Fashion image manipulation poses a challenging image transformation problem that involves integrating a chosen clothing item into an input image. Traditional approaches typically rely on example images of the desired clothing design and transfer them onto the target person, a method known as virtual try-on. In contrast, this study explores fashion image manipulation guided by textual descriptions, which offers advantages such as obviating the need for example images and enabling a broad spectrum of concepts to be expressed through text. However, existing text-based editing techniques are often limited by the requirement for extensively annotated training datasets or by their restricted capability to handle only simple text descriptions. To address these challenges, we propose FashionGen (Fashion Image Regeneration via Text), an innovative text-based manipulation model. FashionGen augments conventional GAN inversion with semantic, pose-related, and image-level constraints to generate the desired images. Leveraging pretrained CLIP models, FashionGen effectively imposes the targeted semantics. Furthermore, we introduce a latent-code regularization technique to enhance control over image fidelity and ensure synthesis from a well-defined latent space. Comprehensive experiments conducted on a dataset combining VITON images and Fashion-Gen text descriptions, along with comparisons against existing editing methods, confirm FashionGen's proficiency in generating realistic design images with superior transformation performance.
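To illustrate the kind of objective the abstract describes, the following is a minimal sketch of a CLIP-guided GAN-inversion loss that combines a semantic text term, image-level and pose-related constraints, and a latent-code regularizer. It assumes a pretrained StyleGAN-like generator G and the OpenAI CLIP package; names such as pose_loss and the lambda_* weights are hypothetical placeholders, not the paper's exact formulation.

import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)

def clip_resize(image):
    # CLIP's visual encoder expects 224x224 inputs.
    return torch.nn.functional.interpolate(
        image, size=(224, 224), mode="bilinear", align_corners=False
    )

def manipulation_loss(G, w, w_init, target_text, source_image, pose_loss,
                      lambda_img=1.0, lambda_pose=1.0, lambda_reg=0.1):
    """Composite objective: CLIP semantic term + image term + pose term + latent regularizer."""
    image = G(w)  # synthesize an image from the latent code being optimized
    text_tokens = clip.tokenize([target_text]).to(device)

    # Semantic constraint: push the rendered image toward the text description in CLIP space.
    img_feat = clip_model.encode_image(clip_resize(image))
    txt_feat = clip_model.encode_text(text_tokens)
    sem = 1.0 - torch.cosine_similarity(img_feat, txt_feat).mean()

    # Image-level constraint: keep the result close to the source image (simple L2 here).
    img = torch.nn.functional.mse_loss(image, source_image)

    # Pose-related constraint: preserve the person's pose (pose_loss is an assumed callable,
    # e.g. a distance between keypoint or parsing maps of the two images).
    pose = pose_loss(image, source_image)

    # Latent-code regularization: keep w near its initialization so synthesis stays
    # within a well-defined region of the latent space.
    reg = torch.norm(w - w_init, p=2) ** 2

    return sem + lambda_img * img + lambda_pose * pose + lambda_reg * reg

In such a setup, w would typically be initialized from an inversion of the source image and then optimized against this loss; the relative weights control the trade-off between following the text and preserving the original person and pose.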