Proceedings of the Fourth Workshop on Neural Generation and Translation 2020
DOI: 10.18653/v1/2020.ngt-1.11

Leveraging Sentence Similarity in Natural Language Generation: Improving Beam Search using Range Voting

Abstract: We propose a method for natural language generation, choosing the most representative output rather than the most likely output. By viewing the language generation process from the voting theory perspective, we define representativeness using range voting and a similarity measure. The proposed method can be applied when generating from any probabilistic language model, including n-gram models and neural network models. We evaluate different similarity measures on an image captioning task and a machine translat…
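
To make the idea concrete, here is a minimal sketch of the kind of re-ranking the abstract describes: beam-search candidates vote for each other via a similarity measure, weighted by their model probabilities, and the candidate with the highest total score is chosen. The function names and the token-overlap similarity below are illustrative assumptions for this sketch, not the paper's actual implementation or its similarity measures.

# Illustrative sketch of a range-voting re-ranker over beam-search
# candidates; names and the similarity function are placeholders.
from collections import Counter
from typing import Callable, List, Sequence, Tuple

def token_overlap(a: Sequence[str], b: Sequence[str]) -> float:
    # A simple bag-of-words similarity in [0, 1] (an illustrative choice).
    overlap = sum((Counter(a) & Counter(b)).values())
    return 2.0 * overlap / (len(a) + len(b)) if (a or b) else 1.0

def rerank_by_range_voting(
    candidates: List[Tuple[Sequence[str], float]],
    similarity: Callable[[Sequence[str], Sequence[str]], float] = token_overlap,
) -> Sequence[str]:
    # Each candidate acts as a voter whose ballot scores every candidate
    # by similarity; ballots are weighted by the voter's (normalised)
    # model probability, and the top-scoring candidate wins.
    total = sum(p for _, p in candidates)
    def score(cand):
        return sum(p / total * similarity(voter, cand) for voter, p in candidates)
    return max((cand for cand, _ in candidates), key=score)

# Usage with toy candidates; in practice these would be beam-search
# outputs with probabilities from the language model.
beam = [("a cat sits on the mat".split(), 0.40),
        ("a cat is on the mat".split(), 0.35),
        ("two dogs run in a park".split(), 0.25)]
print(" ".join(rerank_by_range_voting(beam)))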

Cited by 13 publications (14 citation statements). References 38 publications.
“…We argue this is another piece of evidence for the inadequacy of the mode: by using beam search, they emphasise statistics of high-scoring translations, potentially rare and inadequate ones. Very recently, Borgeaud and Emerson (2020) present a voting-theory perspective on decoding for image captioning and machine translation. Their proposal is closely-related to MBR, but motivated differently.…”
Section: Related Work (mentioning; confidence: 99%)
“…They propose using minimum Bayes risk decoding, which leverages the whole distribution rather than only its mode, and can outperform vanilla beam search in low-resource scenarios. Borgeaud and Emerson (2019), in a similar vein, develop an additional voting-based step on top of beam search to select more representative sequences, based on similarity measures.…”
Section: Related Work (mentioning; confidence: 99%)
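
As a point of comparison with the mode-seeking behaviour of beam search, a sample-based MBR decoder can be sketched as follows; this is a generic recipe under stated assumptions (a toy utility function and pre-drawn samples), not the cited authors' implementation. It selects the hypothesis with the highest total utility against all sampled pseudo-references, thereby using the whole sampled distribution rather than only its mode.

def mbr_decode(samples, utility):
    # Pick the hypothesis whose summed utility against all sampled
    # pseudo-references is highest (equivalently, whose Bayes risk is lowest).
    return max(samples, key=lambda hyp: sum(utility(hyp, ref) for ref in samples))

# Toy usage with an exact-match utility; in practice a metric such as
# BLEU or METEOR would be used.
samples = ["the cat sat", "the cat sat", "a dog ran"]
print(mbr_decode(samples, lambda h, r: float(h == r)))  # prints "the cat sat"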
“…For open-ended language generation, Holtzman et al. (2020) claim that decoding strategies that optimize for output with high probability (like beam search) lead to highly deteriorated texts, since the highest scores are often assigned to generic, incoherent, and repetitive sequences. Several works propose reranking strategies on the set of hypotheses produced by the beam search following different criteria (Dušek and Jurčíček, 2016; Blain et al., 2017; Agarwal et al., 2018; Borgeaud and Emerson, 2020; Hargreaves et al., 2021) to improve both the performance on a given task and the quality of the output. In this work, we present a cognitively-inspired reranking technique for a visual dialogue questioner agent.…”
Section: Related Work (mentioning; confidence: 99%)