“…The former (Gedikli et al., 2014; Chen and Wang, 2017; Chen et al., 2019b) investigates how people perceive different styles of explanations, while the latter provides explanations by designing new explainable recommendation algorithms, to which our work is more closely related. There exist various explanation styles, such as pre-defined templates (Zhang et al., 2014; Li et al., 2020a), ranked sentences (Chen et al., 2019d), image visualizations (Chen et al., 2019c), knowledge graph paths (Xian et al., 2019), and reasoning rules (Shi et al., 2020; Zhu et al., 2021). Among these, generated natural language explanations (Ni et al., 2019; Li et al., 2020c) have recently received much attention, mainly owing to advances in natural language generation technology and the availability of textual data on recommendation platforms such as e-commerce. However, previous works mostly rely on recurrent neural networks (RNNs), e.g., LSTM (Hochreiter and Schmidhuber, 1997) and GRU (Cho et al., 2014), leaving the potentially more effective Transformer (Vaswani et al., 2017) under-explored, which motivates this work.…”