2021
DOI: 10.1162/tacl_a_00368

EDITOR: An Edit-Based Transformer with Repositioning for Neural Machine Translation with Soft Lexical Constraints

Abstract: We introduce an Edit-Based TransfOrmer with Repositioning (EDITOR), which makes sequence generation flexible by seamlessly allowing users to specify preferences in output lexical choice. Building on recent models for non-autoregressive sequence generation (Gu et al., 2019), EDITOR generates new sequences by iteratively editing hypotheses. It relies on a novel reposition operation designed to disentangle lexical choice from word positioning decisions, while enabling efficient oracles for imitation learning and …
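For intuition, here is a minimal sketch of the edit-based decoding loop the abstract describes, written in plain Python. It is illustrative only: predict_reposition, predict_slots, and predict_fill are hypothetical stand-ins for the model's learned classifiers, and the placeholder token and convergence test are simplifications, not the paper's actual decoder.

def apply_reposition(hyp, indices):
    # Each index selects a token of `hyp` to keep at that output slot,
    # so deletion (an index never chosen) and reordering both fall out
    # of the index sequence itself.
    return [hyp[i] for i in indices]

def apply_insertion(hyp, slot_counts, placeholder="<plh>"):
    # slot_counts has len(hyp) + 1 entries, one per gap around the
    # tokens, so insertion also works on an empty hypothesis.
    out = []
    for i, n in enumerate(slot_counts):
        out.extend([placeholder] * n)
        if i < len(hyp):
            out.append(hyp[i])
    return out

def decode(init_hyp, predict_reposition, predict_slots, predict_fill,
           max_iters=10, placeholder="<plh>"):
    # Iteratively refine the hypothesis until the model stops editing.
    hyp = list(init_hyp)
    for _ in range(max_iters):
        indices = predict_reposition(hyp)
        repositioned = apply_reposition(hyp, indices)
        slots = predict_slots(repositioned)
        if indices == list(range(len(hyp))) and not any(slots):
            break  # identity reposition and no insertions: converged
        hyp = apply_insertion(repositioned, slots)
        hyp = [predict_fill(hyp, i) if t == placeholder else t
               for i, t in enumerate(hyp)]
    return hyp

Design note: in this scheme word order is decided entirely by the reposition step, while lexical choice is decided by the insertion and fill steps; that split is the disentanglement the abstract refers to.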

Cited by 20 publications (6 citation statements)
References 32 publications
“…We also calculated the mean number of iterations and the mean time required to translate a sentence. The latter measured the mean latency while the model generated a sequence using a single batch and a single Tesla P100 GPU, similar to Xu and Carpuat [38]. Latency includes the time for nearest-neighbor retrieval (for NeighborEdit).…”
Section: Discussion
confidence: 99%
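To reproduce this style of measurement, a minimal sketch follows; model.translate is a hypothetical single-sentence decoding call, so substitute the API of whatever toolkit hosts the model.

import time

def mean_latency(model, sentences):
    # Mean per-sentence decoding latency at batch size 1. When decoding
    # on a GPU, synchronize the device before reading the clock (e.g.
    # torch.cuda.synchronize() for PyTorch models) so queued kernels
    # are included in the measurement.
    start = time.perf_counter()
    for sentence in sentences:
        model.translate(sentence)  # one sentence per batch
    return (time.perf_counter() - start) / len(sentences)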
“…In addition to the baseline transformer model, we perform further experiments with the latest transformer-based model (i.e., the Edit-Based Transformer with Repositioning: EDITOR [44]) to use as the second baseline model for comparison with our proposed models. EDITOR is a non-autoregressive transformer model that iteratively edits hypotheses using a novel reposition operation.…”
Section: Methods
confidence: 99%
“…The authors in [46] applied the EDITOR and Levenshtein Transformer models to two-way translation of the Thai–Myanmar, Thai–English, and Myanmar–English language pairs without using any lexical constraints, and they showed that the EDITOR model achieves better translation performance for the English-to-Thai, Thai-to-English, and English-to-Myanmar translation directions. Moreover, Xu and Carpuat [44] stated that EDITOR performs more effectively than the Levenshtein Transformer when utilizing soft lexical constraints. Thus, we conduct additional experiments on the EDITOR model with soft lexical constraints for all language pairs and use it as the second baseline model to compare with our proposed models.…”
Section: Methods
confidence: 99%
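Concretely, the paper applies soft constraints by using the constraint tokens as the initial target sequence, which later edits are free to reorder or drop (hence "soft"). A toy run, reusing the decode() sketch after the abstract above with rule-based stand-in predictors that exist purely for illustration:

def toy_reposition(hyp):
    return list(range(len(hyp)))      # keep every token, in order

def toy_slots(hyp):
    if hyp == ["climate", "summit"]:  # ask for one word on each side
        return [1, 0, 1]
    return [0] * (len(hyp) + 1)       # otherwise: no insertions

def toy_fill(hyp, i):
    return "the" if i == 0 else "ended"

constraints = ["climate", "summit"]   # constraint tokens seed the hypothesis
print(decode(constraints, toy_reposition, toy_slots, toy_fill))
# -> ['the', 'climate', 'summit', 'ended']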
“…Later, EDITOR was proposed [14], a non-autoregressive (NAR) transformer in which the decoder applies a sequence of edits to the initial input sequence. That initial sequence may be empty, and the edits consist of repositioning and insertion commands.…”
Section: Natural Language Simplification
confidence: 99%
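As a concrete illustration of the repositioning command, reusing apply_reposition from the sketch after the abstract: the predicted indices select which tokens of the current hypothesis survive and in what order, so a single operation expresses both deletion and reordering.

print(apply_reposition(["a", "b", "c", "d"], [2, 0, 3]))
# -> ['c', 'a', 'd']   (index 1, "b", is never selected, so it is deleted)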