2021
DOI: 10.3390/app11146584
Comparative Analysis of Current Approaches to Quality Estimation for Neural Machine Translation

Abstract: Quality estimation (QE) has recently gained increasing interest as it can predict the quality of machine translation results without a reference translation. QE is an annual shared task at the Conference on Machine Translation (WMT), and most recent studies have applied the multilingual pretrained language model (mPLM) to address this task. Recent studies have focused on the performance improvement of this task using data augmentation with fine-tuning based on a large-scale mPLM. In this study, we eliminate the…
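The setup the abstract describes — scoring an MT output against its source sentence without any reference translation — is typically framed as regression over encoder representations. A minimal sketch of that idea follows; the embeddings and weights below are random stand-ins for a real mPLM encoder and a fine-tuned head, not the paper's actual model:

```python
import numpy as np

# Sentence-level QE as regression. We assume fixed-size pooled embeddings
# for the source sentence and its machine translation, as produced by some
# multilingual encoder (e.g. an mPLM such as XLM-R); here they are random
# stand-ins for illustration only.
rng = np.random.default_rng(0)
dim = 8

src_emb = rng.normal(size=dim)  # pooled source-sentence embedding
mt_emb = rng.normal(size=dim)   # pooled MT-output embedding

# A common QE feature vector: the two embeddings concatenated with their
# element-wise product and absolute difference.
features = np.concatenate(
    [src_emb, mt_emb, src_emb * mt_emb, np.abs(src_emb - mt_emb)]
)

# Linear regression head; in practice these weights are learned by
# fine-tuning on human quality judgements (e.g. WMT direct assessments).
w = rng.normal(size=features.shape[0])
b = 0.0
quality_score = float(features @ w + b)
print(quality_score)
```

The feature construction (concatenation plus product and difference) is one conventional choice for comparing two sentence embeddings; the fine-tuning approaches surveyed in the paper instead feed the sentence pair jointly through the mPLM.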

Cited by 8 publications (5 citation statements)
References 22 publications
“…The pre-training task does not have to be tied to QE. Large pre-trained language models have also been used for QE (Hu et al, 2020;Moura et al, 2020;Nakamachi et al, 2020;Eo et al, 2021). In contrast, our pre-training aims not only to acquire better sentence representations in general but specifically to acquire better sentence representations for translation quality score estimation.…”
Section: Related Work (mentioning)
confidence: 99%
“…After the deep neural network (also called deep learning) [17] became popular in various areas, such as machine translation [2,41], chatbot [42], unmanned aerial vehicle [43], person re-identification [44,45], multiple object tracking [46], image recognition [47,48] and signal processing [49], explainable AI received attention again. This time, people saw the power of deep learning and never doubted its ability.…”
Section: Deep Learning (mentioning)
confidence: 99%
“…Artificial intelligence (AI) is more intelligent than humans in a few fields, such as image recognition competitions, chess, Go, and intellectual television questions and answers [1][2][3][4][5][6]. Furthermore, lifelong machine learning (LML) [7][8][9][10][11][12] aims to make AI more effective [13][14][15].…”
Section: Introduction (mentioning)
confidence: 99%