Keyphrase generation is a long-standing task in scientific literature retrieval, and Transformer-based models dramatically outperform other baselines on it. In cross-domain keyphrase generation, topic information guides the generation process; for an individual document, the title can play the same guiding role while conveying richer semantic information. Motivated by this, we propose an enhanced model architecture named TAtrans. In this work, we investigate how title attention and a sequence code that represents the order of phrases within the keyphrase sequence improve Transformer-based keyphrase generation. We conduct experiments on five widely used English datasets designed for keyphrase generation. Our method surpasses the Transformer baseline by 3.2% in F1@5 on KP20k, and the results show that it outperforms all previous models at predicting present keyphrases. To evaluate the proposed model on Chinese text, we construct a new Chinese abstract dataset, CNKIL, containing 54,546 records. On CNKIL, the F1@5 for predicting present keyphrases exceeds that of the Transformer baseline by 2.2%. However, the model shows no significant improvement in predicting absent keyphrases.
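To make the title-guidance idea concrete, the following is a minimal PyTorch sketch of how decoder states could attend to encoded title tokens and be fused with the ordinary decoder output. It is not the TAtrans implementation; the module name, dimensions, and gating scheme are illustrative assumptions only.

```python
# Illustrative sketch (not the authors' code): decoder states cross-attend to the
# encoded title tokens, and a learned gate mixes the title context back into the
# decoder output before keyphrase prediction.
import torch
import torch.nn as nn


class TitleAttentionFusion(nn.Module):
    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        # Cross-attention from decoder states to the encoded title tokens.
        self.title_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Gate that decides, per dimension, how much title context to use.
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, dec_states, title_enc, title_pad_mask=None):
        # dec_states: (batch, tgt_len, d_model) decoder outputs over the abstract
        # title_enc:  (batch, title_len, d_model) encoder outputs over the title
        title_ctx, _ = self.title_attn(
            query=dec_states, key=title_enc, value=title_enc,
            key_padding_mask=title_pad_mask,
        )
        # Concatenate and project, then gate between decoder state and title context.
        g = torch.sigmoid(self.gate(torch.cat([dec_states, title_ctx], dim=-1)))
        return g * dec_states + (1.0 - g) * title_ctx


if __name__ == "__main__":
    fusion = TitleAttentionFusion()
    dec = torch.randn(2, 20, 512)    # decoder states for 2 abstracts
    title = torch.randn(2, 8, 512)   # encoded title tokens
    print(fusion(dec, title).shape)  # torch.Size([2, 20, 512])
```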