Pre-trained sequence-to-sequence models such as BERTSUM and BART have shown state-of-the-art results in abstractive summarization. In these models, during fine-tuning, the encoder maps input sentences to context vectors in the latent space and the decoder learns the summary generation task from those context vectors. In our approach, we partition the context vectors into two clusters, salient and non-salient, so that the decoder can attend more to the salient context vectors when generating the summary. To this end, we propose a novel clustering transformer layer between the encoder and the decoder, which first forms the two clusters of salient and non-salient vectors, and then normalizes and shrinks each cluster to push the two apart in the latent space. Our experimental results show that the proposed model outperforms the existing BART model by learning these distinct cluster patterns, improving ROUGE by up to 4% and BERTScore by 0.3% on average on the CNN/DailyMail and XSUM datasets.
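The abstract only names the mechanism, so the following is a minimal PyTorch sketch of what such a clustering layer might look like. It assumes a learned saliency gate for the soft cluster assignment and a centroid-shrinkage step for pushing the clusters apart; the class name, the gating scheme, and the shrinkage details are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class ClusteringLayer(nn.Module):
    """Hypothetical clustering layer between encoder and decoder.

    Softly assigns each encoder context vector to a 'salient' or
    'non-salient' cluster via a learned gate, layer-normalizes the
    vectors, and shrinks each vector toward its cluster centroid so
    the two clusters separate in the latent space.
    """
    def __init__(self, d_model: int, shrink: float = 0.5):
        super().__init__()
        self.gate = nn.Linear(d_model, 1)  # per-token saliency score
        self.norm = nn.LayerNorm(d_model)
        self.shrink = shrink               # 0 = collapse to centroid, 1 = unchanged

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq_len, d_model) encoder context vectors
        p = torch.sigmoid(self.gate(h))    # soft saliency in (0, 1)
        h = self.norm(h)
        # soft centroids of the salient and non-salient clusters
        w_sal = p / p.sum(dim=1, keepdim=True).clamp_min(1e-9)
        w_non = (1 - p) / (1 - p).sum(dim=1, keepdim=True).clamp_min(1e-9)
        c_sal = (w_sal * h).sum(dim=1, keepdim=True)  # (batch, 1, d_model)
        c_non = (w_non * h).sum(dim=1, keepdim=True)
        # interpolate each vector toward its (soft) cluster centroid
        target = p * c_sal + (1 - p) * c_non
        return self.shrink * h + (1 - self.shrink) * target
```

In use, such a layer would sit between the encoder outputs and the decoder's cross-attention, so the decoder attends over the reshaped context vectors rather than the raw encoder states.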