Neural machine translation (NMT) models consist mainly of an encoder and a decoder. The encoder extracts a feature representation of the source-language sentence, and the decoder predicts the next token from that representation and the decoding state at the current step. In this process there is no guarantee that the features extracted by the encoder faithfully capture the meaning of the source sentence, nor that the decoder predicts the correct token; these weaknesses can lead to over-translation and under-translation in the output. Previous work alleviated this problem by translating the predicted target sentence back into the source language and measuring the gap between the reconstructed source sentence and the original one. Inspired by this approach, we propose integrating a reconstructor and a post-editor into NMT during training. The reconstructor takes the NMT translation as input and reconstructs the source sentence, while the post-editor takes the translation as input and post-edits it to predict the target sentence. Training with the reconstructor and the post-editor forces the semantics of the translation to agree with both the source sentence and the target sentence. Experimental results show that our approach effectively improves NMT performance on multiple translation tasks.
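The joint training described above could be sketched as a weighted sum of three cross-entropy terms: the translation loss, a reconstruction loss on the source sentence, and a post-editing loss on the target sentence. This is a minimal illustrative sketch, not the paper's implementation; the function names and the weights `lam_rec` and `lam_post` are assumptions.

```python
import math

def cross_entropy(probs, target_ids):
    """Average negative log-likelihood of the target tokens,
    given per-position probability distributions."""
    return -sum(math.log(p[t]) for p, t in zip(probs, target_ids)) / len(target_ids)

def joint_loss(trans_probs, tgt_ids,      # translation: predict target sentence
               recon_probs, src_ids,      # reconstructor: predict source sentence
               post_probs, post_tgt_ids,  # post-editor: predict target sentence
               lam_rec=0.5, lam_post=0.5):
    """Hypothetical combined objective: translation loss plus weighted
    reconstruction and post-editing losses (weights are assumed)."""
    return (cross_entropy(trans_probs, tgt_ids)
            + lam_rec * cross_entropy(recon_probs, src_ids)
            + lam_post * cross_entropy(post_probs, post_tgt_ids))
```

Minimizing the reconstruction term pushes the translation to retain source semantics, while the post-editing term pushes it toward the reference target.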
Neural machine translation models are guided by a loss function to select source-sentence features and generate output close to the human reference. When data resources are abundant, the model can focus on features that produce high-quality translations, such as part-of-speech (POS) and other grammatical features. When data resources are limited, however, the model cannot focus precisely on these features, because the shortage of samples causes it to overfit before it learns them. Previous work enriched the features by incorporating source-side POS tags or by multitask learning, but these methods use only the source POS or produce translations conditioned on generated target POS. We propose introducing POS information through a combination of multitask learning and reconstructors. We obtain POS tags with an additional encoder and decoder and compute the corresponding loss functions, which are optimized jointly with the machine translation loss over the parameters of the entire model, making the model attend to POS features. The POS features the model attends to then guide the translation process and alleviate its inability to focus on POS features under low-resource conditions. Experiments on multiple translation tasks show that the method improves over the baseline model by 0.4–1 BLEU.
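The multitask objective described above could be sketched as the translation loss plus weighted POS-tagging losses on the source and target sides. This is an illustrative sketch under assumptions: the interpolation weights `alpha` and `beta` and the function names are not from the paper.

```python
import math

def softmax(logits):
    """Numerically stable softmax over one logit vector."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def tagging_loss(logits_seq, tag_ids):
    """Token-level cross-entropy for a POS-tagging head."""
    total = 0.0
    for logits, t in zip(logits_seq, tag_ids):
        total -= math.log(softmax(logits)[t])
    return total / len(tag_ids)

def multitask_loss(mt_loss, src_pos_logits, src_tags,
                   tgt_pos_logits, tgt_tags, alpha=0.3, beta=0.3):
    """Assumed combination: translation loss plus weighted source- and
    target-side POS losses, optimized jointly over the whole model."""
    return (mt_loss
            + alpha * tagging_loss(src_pos_logits, src_tags)
            + beta * tagging_loss(tgt_pos_logits, tgt_tags))
```

Because the POS losses share parameters with the translation model, gradients from the tagging tasks steer the encoder and decoder toward POS-aware representations even when parallel data are scarce.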