“…Significant progress has been made in abstractive summarization with large pre-trained Transformers (Dong et al., 2019; Lewis et al., 2020; Zhang et al., 2019; Raffel et al., 2019; Song et al., 2019). However, style-controlled summarization is much less studied (Chandrasekaran et al., 2020), and two key challenges have been identified: (1) a lack of parallel data, and (2) expensive (re)training, e.g., separate summarizers must be trained or fine-tuned for each style in a pre-defined set (Zhang et al., 2018). Both challenges call for inference-time methods built on trained summarization models that adjust styles flexibly and efficiently.…”