2020
DOI: 10.32604/jai.2020.010476

Improve Neural Machine Translation by Building Word Vector with Part of Speech

Cited by 9 publications (7 citation statements)
References 4 publications

“…At the same time, in order to prevent p_n from becoming higher as the sentence length becomes longer, the length penalty factor Brevity Penalty (BP) is introduced in the calculation of BLEU, and the calculation process is shown in Eq. (17)…”
Section: Discussion (mentioning)
confidence: 99%
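
Eq. (17) itself is not reproduced in this excerpt; for reference, the standard brevity penalty and BLEU score that the statement appears to describe are usually defined as

    BP = \begin{cases} 1 & \text{if } c > r \\ e^{\,1 - r/c} & \text{if } c \le r \end{cases}

    \mathrm{BLEU} = BP \cdot \exp\!\left( \sum_{n=1}^{N} w_n \log p_n \right)

where c is the candidate translation length, r the effective reference length, p_n the modified n-gram precision, and w_n the n-gram weights (typically w_n = 1/N with N = 4); the exact form of Eq. (17) in the citing paper may differ.
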
“…Although neural machine translation has achieved great success, it still suffers from translations that are fluent but not faithful enough [15], difficulty in handling rare words, poor performance on low-resource languages, poor cross-domain adaptability, low utilization of prior knowledge, and mistranslations and missed translations [16]. Inspired by classic statistical machine translation research, it has become a hot topic in neural machine translation to use existing linguistic knowledge and incorporate linguistic information into the model [17], alleviating these inherent difficulties and improving translation quality [18]. Among these issues, this paper focuses on mistranslations and omissions.…”
Section: Introduction (mentioning)
confidence: 99%
“…In recent years, graph convolutional neural networks (GCNs) have enabled end-to-end learning of node and structural features, showing advantages over traditional models on richly relational graph data [40]. This paper uses GCN models to explore the effectiveness of multi-label classification for attack behavior extraction, focusing on hierarchical textual relation networks to identify attack behaviors in cyber threat intelligence and GCN models to fuse textual relational features.…”
Section: Constructing Hierarchy Relationship of Heterogeneous Textual... (mentioning)
confidence: 99%
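
As background for the GCN models mentioned above (the exact variant used in [40] is not specified in this excerpt), the commonly used layer-wise propagation rule for a graph convolutional network is

    H^{(l+1)} = \sigma\!\left( \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} H^{(l)} W^{(l)} \right)

where \tilde{A} = A + I is the adjacency matrix with self-loops, \tilde{D} its degree matrix, H^{(l)} the node feature matrix at layer l, W^{(l)} a trainable weight matrix, and \sigma a nonlinearity; for multi-label classification the final layer is typically followed by a per-label sigmoid.
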
“…Processing text features and video features in parallel with different models in two major directions is a reasonable solution. Zhang et al. [17] proposed a new word vector training method based on part-of-speech (POS) features to distinguish the same word in different contexts and improve the quality of question text translation. Zhang et al. [18] proposed a motion-blurred image restoration method based on the joint invertibility of Point Spread Functions (PSFs), solving the iterative restoration problem through the joint solution of multiple images in the spatial domain.…”
Section: Video Question and Answer (mentioning)
confidence: 99%
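
As a rough illustration of the general idea behind POS-augmented word vectors (a minimal sketch only; the embedding sizes, module name, and concatenation scheme below are assumptions, not the construction used in [17]):

    import torch
    import torch.nn as nn

    class WordPosEmbedding(nn.Module):
        """Concatenate a word embedding with a part-of-speech (POS) embedding.
        Dimensions are illustrative assumptions, not values from the cited paper."""
        def __init__(self, vocab_size, pos_tagset_size, word_dim=256, pos_dim=32):
            super().__init__()
            self.word_emb = nn.Embedding(vocab_size, word_dim)
            self.pos_emb = nn.Embedding(pos_tagset_size, pos_dim)

        def forward(self, word_ids, pos_ids):
            # word_ids, pos_ids: LongTensors of shape (batch, seq_len)
            # returns a (batch, seq_len, word_dim + pos_dim) tensor
            return torch.cat([self.word_emb(word_ids), self.pos_emb(pos_ids)], dim=-1)
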
“…A novel multi-headed attention mapping network was proposed by Zhang et al. [32] to extract deeper overall relationships. Yang et al. [17] applied image attention multiple times in a stacked fashion to gradually infer the answer. Xu et al. [33] used a multi-hop image attention mechanism to capture fine-grained information from the question text.…”
Section: Attention Mechanism (mentioning)
confidence: 99%