“…We are particularly interested in leveraging BERT for better estimates of sentence quality and diversity. This paper extends previous work (Cho et al., 2019) by incorporating deep contextualized representations into DPP, with an emphasis on better sentence selection for extractive multi-document summarization. The major research contributions of this work include the following: (i) we make a first attempt to combine DPP with BERT representations to measure sentence quality and diversity, and report encouraging results on benchmark summarization datasets; (ii) our findings suggest that it is best to model sentence quality, i.e., how important a sentence is to the summary, by combining semantic representations with surface indicators of the sentence, whereas pairwise sentence dissimilarity can be determined by semantic representations alone; (iii) our analysis reveals that combining contextualized representations with surface features (e.g., sentence length, position, centrality, etc.) remains necessary, as deep representations, albeit powerful, may not capture domain-specific semantics/knowledge such as word frequency.…”
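To make the roles of quality and diversity concrete, the sketch below illustrates the standard quality-diversity decomposition of a DPP L-ensemble kernel, L_ij = q_i · S_ij · q_j, where q_i is a per-sentence quality score and S_ij a pairwise similarity. This is a minimal illustration under stated assumptions, not the paper's implementation: `sentence_embeddings` stands in for BERT-derived sentence vectors, `quality_scores` for a learned combination of semantic and surface features, and `greedy_map_selection` is a generic greedy heuristic for approximate MAP inference, all named here for illustration only.

```python
import numpy as np


def build_dpp_kernel(sentence_embeddings, quality_scores):
    """Build an L-ensemble kernel L = diag(q) S diag(q).

    S holds pairwise sentence similarities (cosine similarity of
    contextualized embeddings, promoting diversity); q holds per-sentence
    quality scores (importance). Both are assumed to be precomputed.
    """
    # Normalize embeddings so dot products equal cosine similarities.
    emb = sentence_embeddings / np.linalg.norm(
        sentence_embeddings, axis=1, keepdims=True
    )
    S = emb @ emb.T                   # pairwise similarity matrix
    q = np.asarray(quality_scores)    # per-sentence quality
    return q[:, None] * S * q[None, :]  # L_ij = q_i * S_ij * q_j


def greedy_map_selection(L, k):
    """Greedily pick k sentences, each time adding the candidate that most
    increases log det of the selected submatrix (a common DPP heuristic)."""
    selected = []
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(L.shape[0]):
            if i in selected:
                continue
            idx = selected + [i]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            gain = logdet if sign > 0 else -np.inf
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:
            break
        selected.append(best)
    return selected
```

Because det of the selected submatrix grows with high individual quality but shrinks when selected sentences are mutually similar, maximizing it trades off importance against redundancy, which is the property the contributions above rely on when separating quality modeling (semantic plus surface features) from the similarity term (semantic representations alone).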