Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/2021.emnlp-main.662

Machine Translation Decoding beyond Beam Search

Abstract: Beam search is the go-to method for decoding auto-regressive machine translation models. While it yields consistent improvements in terms of BLEU, it is only concerned with finding outputs with high model likelihood, and is thus agnostic to whatever end metric or score practitioners care about. Our aim is to establish whether beam search can be replaced by a more powerful metric-driven search technique. To this end, we explore numerous decoding algorithms, including some which rely on a value function parameterised…
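For context, beam search extends the k most likely partial hypotheses at each step and keeps only the top k by cumulative log-likelihood. The Python sketch below is a toy illustration of that loop, not the paper's implementation; the log_probs model and three-token vocabulary are invented for the example.

import math

# Toy autoregressive "model": returns log-probabilities over a tiny
# vocabulary given a prefix. Purely illustrative; not from the paper.
VOCAB = ["<eos>", "a", "b"]

def log_probs(prefix):
    # Hypothetical rule: "<eos>" becomes more likely as the prefix grows.
    scores = [len(prefix) - 2.0, 1.0, 0.5]
    z = math.log(sum(math.exp(s) for s in scores))
    return [s - z for s in scores]

def beam_search(beam_size=2, max_len=5):
    beams = [(0.0, [])]  # (cumulative log-likelihood, token list)
    for _ in range(max_len):
        candidates = []
        for score, seq in beams:
            if seq and seq[-1] == "<eos>":
                candidates.append((score, seq))  # finished hypothesis
                continue
            for tok, lp in zip(VOCAB, log_probs(seq)):
                candidates.append((score + lp, seq + [tok]))
        # Prune to the beam_size most likely hypotheses.
        beams = sorted(candidates, key=lambda c: c[0], reverse=True)[:beam_size]
    return beams

print(beam_search())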

Cited by 14 publications (11 citation statements). References 8 publications.
“…Planning algorithms like MCTS have also been used to find the optimal text outputs for different natural language processing (NLP) tasks. For example, Scialom et al. (2021), Leblond et al. (2021), and Chaffin et al. (2022) use pre-trained discriminators or pre-defined metrics as reward functions. We want to emphasize that we are the first to combine a tree search algorithm with large language models for general-purpose programming language generation.…”
Section: Related Work
confidence: 99%
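The shared pattern in these works (growing a search tree over continuations and scoring complete sequences with a reward function such as a discriminator or metric) can be sketched as below. Everything here, including the reward, vocabulary, and constants, is a hypothetical stand-in, not any cited system's code.

import math, random

class Node:
    def __init__(self, seq):
        self.seq, self.children = seq, {}
        self.visits, self.value = 0, 0.0

def reward(seq):
    # Hypothetical stand-in for a pre-trained discriminator or metric:
    # rewards sequences containing more "good" tokens.
    return seq.count("good") / max(len(seq), 1)

def mcts(root_seq, vocab, n_sims=200, max_len=6, c=1.4):
    root = Node(tuple(root_seq))
    for _ in range(n_sims):
        node, path = root, [root]
        # Selection: follow UCB down fully expanded nodes.
        while len(node.children) == len(vocab) and len(node.seq) < max_len:
            node = max(node.children.values(),
                       key=lambda ch: ch.value / (ch.visits + 1e-9)
                       + c * math.sqrt(math.log(node.visits + 1) / (ch.visits + 1e-9)))
            path.append(node)
        # Expansion: add one untried continuation.
        if len(node.seq) < max_len:
            tok = random.choice([t for t in vocab if t not in node.children])
            node.children[tok] = Node(node.seq + (tok,))
            node = node.children[tok]
            path.append(node)
        # Rollout: complete the sequence randomly, then score it.
        seq = list(node.seq)
        while len(seq) < max_len:
            seq.append(random.choice(vocab))
        r = reward(seq)
        # Backpropagation: update value estimates along the path.
        for n in path:
            n.visits += 1
            n.value += r
    return max(root.children.values(), key=lambda ch: ch.visits).seq

print(mcts([], ["good", "bad"]))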
“…While the energy model plays a similar role to a QE system, our work differs in two ways: we use an existing, pretrained QE model instead of training a dedicated reranker, making our approach applicable to any MT system without further training; and the QE model is trained to predict human assessments, rather than BLEU scores. Leblond et al. (2021) compare a reinforcement learning approach to reranking approaches (but not MBR decoding, as we do). They investigate the use of reference-based metrics and, for the reward function, a reference-free metric based on a modified BERTScore (Zhang et al., 2020).…”
Section: Related Work
confidence: 99%
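QE-based reranking as described here amounts to scoring each candidate with a reference-free model and keeping the best. A minimal sketch, where qe_score is a crude hypothetical stand-in for a learned QE predictor of human assessments:

def qe_score(source, hypothesis):
    # Hypothetical reference-free QE model: a length-ratio heuristic
    # standing in for a trained quality predictor.
    ratio = len(hypothesis.split()) / max(len(source.split()), 1)
    return -abs(1.0 - ratio)

def rerank(source, candidates):
    # Keep the candidate the QE model prefers.
    return max(candidates, key=lambda hyp: qe_score(source, hyp))

candidates = ["das ist gut", "das ist sehr sehr sehr gut", "gut"]
print(rerank("this is good", candidates))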
“…In neural text generation, the scoring function frequently requires the computation of a neural network forward step, such as a log-probability computed using an autoregressive model, rendering these practices prohibitive. One option is to rely on a heuristic to generate samples to train a neural network that estimates v (Leblond et al., 2021). However, this has been shown to be challenging, as model scores are difficult to estimate.…”
Section: Adaptive Tree Search for Text Generation
confidence: 99%
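The value-estimation setup mentioned here can be sketched as: roll out sequences with a cheap heuristic, label every prefix with the score its completed sequence eventually achieved, and regress an estimator v on those pairs. The featurisation, heuristic, and metric below are toy assumptions, and a linear least-squares fit stands in for the neural network:

import random
import numpy as np

VOCAB = ["good", "bad"]

def final_score(seq):
    # Toy end-of-sequence metric the value function should anticipate.
    return seq.count("good") / len(seq)

def featurize(prefix, max_len=8):
    # Hypothetical prefix features: counts of each token plus length.
    return np.array([prefix.count(t) for t in VOCAB] + [len(prefix) / max_len])

def heuristic_rollouts(n=500, max_len=8):
    # Heuristic data generation: random rollouts, each prefix labelled
    # with the final score of its completed sequence.
    X, y = [], []
    for _ in range(n):
        seq = [random.choice(VOCAB) for _ in range(max_len)]
        s = final_score(seq)
        for t in range(1, max_len + 1):
            X.append(featurize(seq[:t], max_len))
            y.append(s)
    return np.stack(X), np.array(y)

X, y = heuristic_rollouts()
# Linear value model fit by least squares; a neural network would
# replace this step in practice.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
v = lambda prefix: featurize(prefix) @ w
print(round(v(["good", "good", "bad"]), 3))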