“…The former investigates how people perceive different styles of explanation [8,9,19], while the latter provides explanations by designing new explainable recommendation algorithms, to which our work is more closely related. There exist various explanation styles, such as pre-defined templates [24,49,60], item features [20,54], ranked text [5,12,27], image visualizations [10], knowledge graph paths [1,18,55,56], and reasoning rules [7,46,63], but in this work we focus on generating natural language explanations. Previous works [6,13,26,57] mostly rely on RNNs, e.g., LSTM [22] and GRU [14], or un-pretrained Transformers [29], leaving the potentially more effective pre-trained models under-explored, which motivates this work.…”