2020
DOI: 10.1155/2020/6140153
A Joint Back-Translation and Transfer Learning Method for Low-Resource Neural Machine Translation

Abstract: Neural machine translation (NMT) for low-resource languages has drawn great attention in recent years. In this paper, we propose a joint back-translation and transfer learning method for low-resource languages. It is widely recognized that data augmentation methods and transfer learning methods are both straightforward and effective ways to address low-resource problems. However, existing methods, which utilize only one of these approaches, limit the capacity of NMT models for low-resource problems. In order to make f…

Cited by 7 publications (5 citation statements) | References 23 publications
“…The goal of the fine-tuning phase is to further optimize performance on the low-resource translation task on top of the pre-trained model. Specifically, the selected pre-trained model is loaded and fine-tuned on a bilingual parallel dataset of the target language. One such strategy is hierarchical fine-tuning: first fine-tune within a large language family, and then fine-tune on the specific target low-resource language, which helps the model gradually focus on more specific and scarce language features [22].…”
Section: Fine-tuning Strategies and Low-resource Translation Performance
confidence: 99%
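The hierarchical fine-tuning described in this statement amounts to a two-stage training loop. Below is a minimal sketch using the HuggingFace transformers API; the checkpoint name (Helsinki-NLP/opus-mt-mul-en), the toy sentence pairs, and the two-stage learning-rate schedule are illustrative assumptions, not the cited work's actual setup.

import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "Helsinki-NLP/opus-mt-mul-en"  # assumed multilingual pre-trained checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def fine_tune(model, pairs, epochs=1, lr=5e-5):
    # One fine-tuning stage over (source, target) sentence pairs.
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for src, tgt in pairs:
            batch = tokenizer(src, text_target=tgt, return_tensors="pt", truncation=True)
            loss = model(**batch).loss  # cross-entropy over target tokens
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()

# Stage 1: fine-tune on related languages from the same family (toy data).
fine_tune(model, [("ein Beispielsatz", "an example sentence")], lr=5e-5)
# Stage 2: fine-tune on the target low-resource language, with a smaller learning rate.
fine_tune(model, [("'n voorbeeldsin", "an example sentence")], lr=1e-5)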
“…NMT faces the challenge of handling rare words. Researchers have tried to address this problem as well, and have achieved satisfactory results [16].…”
Section: ( )
confidence: 99%
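The statement above does not name the specific technique of [16]; a common remedy for rare words in NMT is subword segmentation (e.g., byte-pair encoding), which splits rare words into frequent subword units so the model never encounters a truly unknown token. A minimal sketch with the sentencepiece library follows; the corpus path and vocabulary size are illustrative assumptions.

import sentencepiece as spm

# Train a small BPE model on a hypothetical corpus file (one sentence per line).
spm.SentencePieceTrainer.train(
    input="corpus.txt",   # assumed path to training text
    model_prefix="bpe",
    vocab_size=8000,
    model_type="bpe",
)

sp = spm.SentencePieceProcessor(model_file="bpe.model")

# A rare word decomposes into frequent subword pieces instead of an <unk> token.
print(sp.encode("hydroxychloroquine", out_type=str))
# e.g. ['▁hydro', 'xy', 'chlor', 'o', 'qu', 'ine']; the actual split depends on the corpus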
“…Search strategies that tune hyper-parameters so that the trained model reaches maximum accuracy with its new set of trained parameters are called hyper-parameter optimization techniques. However, in NMT, apart from hyper-parameter optimization, researchers have used other approaches, such as linguistic features, abstract meaning representation on semantic graphs, and various other novel methods, to improve the performance of NMT systems, specifically for low-resource language pairs [15][16][17][18]. The aims of this paper are to:…”
Section: Introduction
confidence: 99%
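As a concrete illustration of the hyper-parameter search described above, here is a minimal random-search sketch; train_and_evaluate is a hypothetical stand-in for training an NMT model under a given configuration and returning its validation BLEU, and the search space is invented for the example.

import random

SEARCH_SPACE = {
    "learning_rate": [1e-4, 3e-4, 1e-3],
    "dropout": [0.1, 0.2, 0.3],
    "num_layers": [4, 6],
}

def train_and_evaluate(config):
    # Hypothetical stand-in: train an NMT model with `config` and
    # return its validation BLEU. Replaced here by a dummy score.
    return random.random()

best_score, best_config = float("-inf"), None
for _ in range(10):  # number of random trials
    config = {name: random.choice(values) for name, values in SEARCH_SPACE.items()}
    score = train_and_evaluate(config)
    if score > best_score:
        best_score, best_config = score, config

print("best config:", best_config, "BLEU:", best_score)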
“…Finding and fine-tuning a network model suitable for a specific dataset is one of these methods; for example, [25] proposes an approach for text classification in an unbalanced data environment based on an individual LSTM neural network. Data augmentation is also a frequently used approach in scenarios with few samples or unbalanced categories [26-29]. Back-translation [26], for instance, has become an effective data augmentation method: researchers translate English text into Chinese and then translate it back into English, obtaining new training data and doubling the size of the dataset (a round-trip sketch follows below). Oversampling [27] and undersampling [28] are both common methods for dealing with unbalanced datasets.…”
Section: Introduction
confidence: 99%
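The English-Chinese-English round trip described in the statement above can be sketched with off-the-shelf MarianMT checkpoints from the transformers library. The checkpoint names are real public models, but using them this way is a generic illustration of back-translation as data augmentation, not the cited papers' exact pipeline.

from transformers import MarianMTModel, MarianTokenizer

def translate(sentences, model_name):
    # Translate a batch of sentences with a pre-trained MarianMT checkpoint.
    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)
    batch = tokenizer(sentences, return_tensors="pt", padding=True, truncation=True)
    generated = model.generate(**batch)
    return tokenizer.batch_decode(generated, skip_special_tokens=True)

originals = ["The weather is nice today."]                      # original training sentences
chinese = translate(originals, "Helsinki-NLP/opus-mt-en-zh")    # EN -> ZH
paraphrases = translate(chinese, "Helsinki-NLP/opus-mt-zh-en")  # ZH -> EN

# Each round-tripped paraphrase is a new training sentence paired with the
# original target, roughly doubling the dataset as the statement describes.
augmented = originals + paraphrases
print(augmented)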