This paper presents a new style transfer-based image manipulation framework that combines generative networks and style transfer networks. Unlike conventional style transfer tasks, we tackle a new task: text-guided image manipulation. Our method requires no reference style image; instead, it generates a style image from the user's input sentence. Since an initial reference sentence for a content image can be produced automatically by an image-to-text model, the user only needs to edit that sentence. This scheme helps users who do not have any image representing their desired style. Although text-guided image manipulation is a new and challenging task, quantitative and qualitative comparisons showed the superiority of our method.

INDEX TERMS Style transfer, image manipulation, text-to-image synthesis, aesthetic analysis, generative model.
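To make the described pipeline concrete, the following is a minimal sketch in plain Python of the flow the abstract outlines: caption the content image, let the user edit the sentence, generate a style image from the text, and apply style transfer. All class and method names (ImageCaptioner, TextToStyleGenerator, StyleTransferNetwork) are hypothetical placeholders for illustration, not the paper's actual modules.

```python
class ImageCaptioner:
    """Placeholder for the image-to-text model that proposes an initial sentence."""
    def describe(self, content_image):
        return "a photo in a plain, realistic style"  # illustrative caption


class TextToStyleGenerator:
    """Placeholder for the generative network that synthesizes a style image from text."""
    def generate(self, sentence):
        return f"<style image conditioned on: {sentence!r}>"  # illustrative output


class StyleTransferNetwork:
    """Placeholder for the style transfer network applied to the content image."""
    def transfer(self, content_image, style_image):
        return f"<{content_image} rendered with {style_image}>"


def manipulate(content_image, edited_sentence=None):
    # 1. An initial reference sentence is produced automatically from the content image.
    sentence = ImageCaptioner().describe(content_image)
    # 2. The user only needs to update this sentence to express the desired style.
    if edited_sentence is not None:
        sentence = edited_sentence
    # 3. A style image is generated from the sentence; no reference style image is required.
    style_image = TextToStyleGenerator().generate(sentence)
    # 4. The style transfer network applies the generated style to the content image.
    return StyleTransferNetwork().transfer(content_image, style_image)


print(manipulate("portrait.jpg",
                 edited_sentence="a photo in the style of an impressionist oil painting"))
```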