2019
DOI: 10.1609/aaai.v33i01.33017297
Graph Based Translation Memory for Neural Machine Translation

Abstract: A translation memory (TM) is proved to be helpful to improve neural machine translation (NMT). Existing approaches either pursue the decoding efficiency by merely accessing local information in a TM or encode the global information in a TM yet sacrificing efficiency due to redundancy. We propose an efficient approach to making use of the global information in a TM. The key idea is to pack a redundant TM into a compact graph and perform additional attention mechanisms over the packed graph for integrating the T…
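The core idea in the abstract, packing a redundant TM into a compact graph, can be illustrated with a prefix-merged (trie-like) graph: retrieved target sentences that share n-grams reuse the same nodes instead of being encoded separately. This is a minimal sketch of the general packing idea, not the paper's actual graph construction (which merges more than shared prefixes); the function name and toy sentences are illustrative assumptions.

```python
# Illustrative sketch only: pack several retrieved TM target sentences
# into a prefix-merged graph, so shared prefixes are stored once.
# Note: a trie merges only common prefixes; the paper's compact graph
# packing is more aggressive.

def pack_tm(sentences):
    """Merge tokenized sentences into a trie-like graph.

    Returns (edges, node_count), where edges maps (node_id, token)
    to the next node_id, and node 0 is the shared root.
    """
    edges = {}
    next_id = 1
    for tokens in sentences:
        node = 0
        for tok in tokens:
            key = (node, tok)
            if key not in edges:
                edges[key] = next_id
                next_id += 1
            node = edges[key]
    return edges, next_id

# Toy TM: three similar retrieved target sentences.
tm = [
    "the cat sat on the mat".split(),
    "the cat sat on the rug".split(),
    "the dog sat on the mat".split(),
]
edges, n_nodes = pack_tm(tm)
flat_tokens = sum(len(s) for s in tm)
print(len(edges), "graph edges vs", flat_tokens, "flat tokens")
# → 12 graph edges vs 18 flat tokens
```

An attention mechanism over the packed graph would then attend to the 12 graph nodes rather than 18 redundant token positions, which is the efficiency gain the abstract claims.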

Cited by 33 publications (38 citation statements)
References 13 publications
“…2 We can see that our results significantly outperform previous arts. Notably, our best model (model #5) surpasses the best reported model (Xia et al, 2019) by 1.69 BLEU points in average and up to 2.9 BLEU points (De⇒En). This result verifies the effectiveness of our proposed models.…”
Section: Contrast To Previous Bilingual Tm Systemsmentioning
confidence: 77%
“…Bulte and Tezcan (2019) and Xu et al (2020) used fuzzy matching with translation memories and augmented source sequences with retrieved source-target pairs. Xia et al (2019) directly ignored the source side of a TM and packed the target side into a compact graph. Khandelwal et al (2020) ran an existing translation model on large bi-text corpora and recorded all hidden states for later nearest-neighbor search at each decoding step, which is very compute-intensive.…”
Section: Related Workmentioning
confidence: 99%
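The Khandelwal et al (2020) approach mentioned in the citation statement above (record all decoder hidden states, then do nearest-neighbor search at each decoding step) can be sketched as follows. This is a toy illustration of the datastore idea with random vectors, not the actual kNN-MT implementation; the function name, distance choice, and temperature are assumptions made for the example.

```python
# Illustrative sketch of a hidden-state datastore for kNN-based decoding:
# offline, store (hidden state -> observed next token) pairs; online, at
# each decoding step, retrieve the k nearest stored states and aggregate
# their tokens into a distribution. Toy random data stands in for real
# model states, which is why this is compute-intensive at scale: the
# datastore holds one entry per target token in the bi-text corpora.
import numpy as np

rng = np.random.default_rng(0)
keys = rng.normal(size=(1000, 16))        # recorded decoder hidden states
values = rng.integers(0, 50, size=1000)   # next-token id seen with each state

def knn_next_token_probs(query, k=8, temperature=1.0, vocab=50):
    """Distribution over next tokens from the k nearest stored states (L2)."""
    dists = np.linalg.norm(keys - query, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = np.exp(-dists[nearest] / temperature)
    probs = np.zeros(vocab)
    for idx, w in zip(nearest, weights):
        probs[values[idx]] += w
    return probs / probs.sum()

probs = knn_next_token_probs(rng.normal(size=16))
```

In kNN-MT this distribution is interpolated with the base model's softmax; the graph-based TM approach avoids the per-step search entirely by attending over a pre-packed graph instead.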