2017
DOI: 10.1145/3099556
Translating Low-Resource Languages by Vocabulary Adaptation from Close Counterparts

Abstract: Some natural languages belong to the same family or share similar syntactic and/or semantic regularities. This property persuades researchers to share computational models across languages and benefit from high-quality models to boost existing low-performance counterparts. In this article, we follow a similar idea, whereby we develop statistical and neural machine translation (MT) engines that are trained on one language pair but are used to translate another language. First we train a reliable model for a hig…

Cited by 18 publications (18 citation statements) | References: 23 publications
“…This means learning some of its characteristics (e.g. syntax and morphology) and transferring them to the low-resource MT model [14,20,21,26]. These approaches do not necessarily use parallel texts of the low-resource language, as they rely on the resource-rich language, which could result in poor coverage of the morphology of the low-resource language.…”
Section: Related Work (mentioning)
confidence: 99%
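The parent-child transfer idea quoted in this statement can be made concrete with a small sketch. The snippet below is not the article's exact procedure; it only illustrates the general recipe of reusing a model trained on a resource-rich pair and overwriting the embeddings of child-vocabulary words the parent has never seen before fine-tuning on the small corpus. The model interface (`src_embedding`), the vocabulary dictionaries, and `low_resource_batches` are hypothetical placeholders, and the code assumes a PyTorch model whose child vocabulary is mapped into the parent's embedding table.

```python
# Hedged sketch of parent-child transfer for low-resource MT.
# Assumptions (not from the article): a PyTorch seq2seq `parent_model` with a
# `src_embedding` layer, vocabularies given as {word: id} dicts that index the
# same embedding table, and `low_resource_batches` yielding (src, tgt) tensors.
import torch

def adapt_to_child(parent_model, parent_vocab, child_vocab,
                   low_resource_batches, loss_fn, lr=1e-4, epochs=3):
    emb = parent_model.src_embedding.weight.data   # assumed attribute name
    for word, child_id in child_vocab.items():
        if word not in parent_vocab:
            # Word unseen by the parent: start from a fresh random vector.
            emb[child_id].normal_(0.0, 0.02)
        # Shared (identical or cognate) words keep the parent's trained vector.

    # Fine-tune all parameters on the small low-resource corpus.
    opt = torch.optim.Adam(parent_model.parameters(), lr=lr)
    for _ in range(epochs):
        for src, tgt in low_resource_batches:
            opt.zero_grad()
            loss = loss_fn(parent_model(src, tgt), tgt)  # hypothetical interface
            loss.backward()
            opt.step()
    return parent_model
```

Keeping the parent's vectors for shared words is what lets the related high-resource language bootstrap the low-resource one, while words without a counterpart start from scratch; this is one simple reading of why such approaches can still cover the low-resource morphology poorly.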
“…In the encoder-decoder architecture discussed by Peyman et al. [24], two recurrent neural networks (RNNs) are trained jointly to maximize the conditional probability of a target sequence (candidate translation) y = y_1, ..., y_m given a source sentence x = x_1, ..., x_n. Input words are processed sequentially until the end of the input string is reached.…”
Section: Neural Machine Translation (mentioning)
confidence: 99%
“…In the encoder-decoder architecture discussed by Peyman et al. [21], two recurrent neural networks (RNNs) are trained jointly to maximize the conditional probability of a target sequence (candidate translation) y = y_1, …, y_m given a source sentence x = x_1, …, x_n. Input words are processed sequentially until the end of the input string is reached.…”
Section: Neural Machine Translation (mentioning)
confidence: 99%
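As a reading aid for the encoder-decoder description quoted in the two statements above, here is a minimal sketch, in PyTorch by assumption and not the implementation used in the cited work, of two GRUs trained jointly so that the decoder maximizes the conditional probability of y = y_1, ..., y_m given x = x_1, ..., x_n. Vocabulary sizes, dimensions, and the random toy batch are illustrative placeholders.

```python
# Minimal encoder-decoder sketch: maximizing p(y | x) by minimizing the
# token-level negative log-likelihood (cross-entropy) of the reference.
import torch
import torch.nn as nn

SRC_VOCAB, TGT_VOCAB, EMB, HID = 8000, 8000, 256, 512  # illustrative sizes

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(SRC_VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)

    def forward(self, src):                 # src: (batch, src_len)
        _, h = self.rnn(self.emb(src))      # read x_1 ... x_n sequentially
        return h                            # final hidden state summarizes x

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(TGT_VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, TGT_VOCAB)

    def forward(self, tgt_in, h):           # teacher forcing on the gold prefix
        o, _ = self.rnn(self.emb(tgt_in), h)
        return self.out(o)                  # logits over the target vocabulary

enc, dec = Encoder(), Decoder()
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()))
loss_fn = nn.CrossEntropyLoss()

# One toy training step on random token ids.
src = torch.randint(0, SRC_VOCAB, (4, 10))
tgt = torch.randint(0, TGT_VOCAB, (4, 12))
opt.zero_grad()
logits = dec(tgt[:, :-1], enc(src))         # predict y_t from y_<t and x
loss = loss_fn(logits.reshape(-1, TGT_VOCAB), tgt[:, 1:].reshape(-1))
loss.backward()
opt.step()
```

In practice the final encoder state is usually augmented or replaced by an attention mechanism, but the training objective, the negative log-likelihood of the reference translation given the source, is the same quantity the quoted passages describe.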