Text generation has drawn increasing attention in recent years, and Recurrent Neural Networks (RNNs) have achieved strong results on this task. Several parameters and factors can affect the performance of recurrent neural networks, which makes text generation a challenging task that requires careful tuning. This study investigates the impact of three factors on the quality of generated text: 1) the data source and domain, 2) the RNN architecture, and 3) named entity normalization. We conduct several experiments using different RNN architectures (LSTM and GRU) and different datasets (Hulu and Booking). Evaluating generated text is itself challenging, as there is no single metric that perfectly judges the quality and correctness of the generated text. We therefore use several evaluation metrics to assess the performance of the generation models, including training loss, perplexity, readability, and the relevance of the generated text. Most related work does not consider all of these metrics when evaluating text generation. The results suggest that the GRU network outperforms the LSTM network, and that models trained on the Booking dataset perform better than those trained on the Hulu dataset.