INTRODUCTION: Machine translation is an active natural language processing research field of significant scientific and practical value. In practice, linguistic variation, limited semantic knowledge, and the scarcity of parallel language resources constrain its development.
OBJECTIVES: This paper aims to avoid overfitting during the learning process and to improve the generalization ability of complex neural network machine translation models under limited resources.
METHODS: Textual material in the source language was studied, and a suitable text representation model was used to express complex, high-level, and abstract semantic information. A more efficient neural network machine translation integration model was then developed based on the control of the written data and algorithms.
RESULTS: Data mining must be applied to transfer-learning-based complex neural network machine translation systems in order to standardize the finite neural network models.
CONCLUSION: Transfer-learning-based embedded neural network machine translation systems require only a small number of labelled samples to improve system performance. However, this adaptive transfer learning approach can easily cause overfitting in neural network machine translation models; the proposed method therefore avoids excessive fitting of correspondences during learning and improves the generalization ability of the translation model under limited neural network resources.
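The core idea above, fine-tuning a pretrained model on a small labelled set while keeping most parameters frozen to limit overfitting, can be illustrated with a minimal sketch. The toy linear "backbone" and all names here are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch of transfer-learning fine-tuning with a frozen backbone.
# Assumption: a pretrained feature extractor is modelled as a fixed scalar
# weight, and only a small task-specific "head" is trained on few samples.

def predict(x, frozen_w, head_w):
    # "Backbone" feature: a fixed linear map standing in for pretrained layers.
    feature = frozen_w * x
    return head_w * feature

def fine_tune(samples, frozen_w, head_w, lr=0.01, epochs=200):
    # Only the task head is updated; the pretrained weight stays frozen,
    # which limits how many parameters are fitted to the few labelled samples.
    for _ in range(epochs):
        for x, y in samples:
            err = predict(x, frozen_w, head_w) - y
            head_w -= lr * err * (frozen_w * x)  # gradient w.r.t. the head only
    return head_w

# A handful of labelled pairs following y = 6 * x, with a "pretrained"
# backbone weight of 2; the head should learn the residual factor 3.
data = [(1.0, 6.0), (2.0, 12.0), (3.0, 18.0)]
head = fine_tune(data, frozen_w=2.0, head_w=0.0)
print(round(head, 2))  # close to 3.0
```

Freezing the backbone is one common way to keep the effective model capacity small relative to the labelled data, which is the same trade-off the conclusion describes for resource-limited translation models.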