Proceedings of the Third Conference on Machine Translation: Shared Task Papers 2018
DOI: 10.18653/v1/w18-6465
Alibaba Submission for WMT18 Quality Estimation Task

Abstract: The goal of the WMT 2018 Shared Task on Translation Quality Estimation is to investigate automatic methods for estimating the quality of machine translation results without reference translations. This paper presents the QE Brain system, which proposes the neural Bilingual Expert model as a feature extractor based on a conditional target language model with a bidirectional transformer, and then processes the semantic representations of the source and the translation output with a Bi-LSTM predictive model for automatic qu…
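The two-stage design described in the abstract can be sketched as follows: per-token semantic features from a "bilingual expert" extractor (in the paper, a bidirectional-transformer conditional language model; here a stand-in random tensor) are fed to a Bi-LSTM that regresses a sentence-level quality score. All dimensions, names, and the pooling choice below are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class BiLSTMQualityEstimator(nn.Module):
    """Toy sentence-level QE head: Bi-LSTM over expert features, pooled to one score."""
    def __init__(self, feat_dim=64, hidden=32):
        super().__init__()
        self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * hidden, 1)  # maps pooled Bi-LSTM states to an HTER-like score

    def forward(self, feats):                  # feats: (batch, seq_len, feat_dim)
        states, _ = self.bilstm(feats)         # (batch, seq_len, 2 * hidden)
        pooled = states.mean(dim=1)            # average over translation tokens
        return torch.sigmoid(self.score(pooled)).squeeze(-1)  # one quality score in [0, 1]

# Stand-in for the bilingual expert's per-token features: 4 sentences, 10 tokens, 64 dims.
expert_features = torch.randn(4, 10, 64)
scores = BiLSTMQualityEstimator()(expert_features)
print(scores.shape)  # torch.Size([4]) — one score per sentence
```

In the actual system the extractor is pretrained as a conditional language model on parallel data, so its features encode how "expected" each target token is given the source; the untrained sketch above only shows the data flow.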

Cited by 35 publications (36 citation statements) · References 12 publications
“…Different from most previous quality estimation studies that require feature extraction (Blatz et al., 2004; Specia et al., 2009; Salehi et al., 2014) or post-edited data (Kim et al., 2017; Wang et al., 2018; Ive et al., 2018) to train external confidence estimators, all our approach needs is the NMT model itself. Hence, it is easy to apply our approach to arbitrary NMT models trained for arbitrary language pairs.…”
Section: Introduction
confidence: 99%
“…• SRC → PE: trained first on the in-domain corpus provided, then fine-tuned on the shared task data. deepQUEST is the open source system developed by Ive et al. (2018), UNQE is the unpublished system from Jiangxi Normal University, described by Specia et al. (2018a), and QE Brain is the system from Alibaba described by Wang et al. (2018). Reported numbers for the OpenKiwi system correspond to the best models on the development set: the STACKED model for prediction of MT tags, and the ENSEMBLED model for the rest.…”
Section: Benchmark Experiments
confidence: 99%
“…• Implementation of four QE systems: QUETCH (Kreutzer et al., 2015), NUQE (Martins et al., 2016), Predictor-Estimator (Wang et al., 2018), and a stacked ensemble with a linear system (Martins et al., 2016, 2017);…”
Section: Introduction
confidence: 99%
“…To implement the proposed model, the authors combined a neural word prediction model with word-level and sentence-level translation QE models. Later, Wang et al. [20] replaced the above neural MT model with a modified version of the self-attention-based transformer model [21] for sentence-level translation quality estimation on English–German.…”
Section: Nivedita Bharti, Nisheeth Joshi, Iti Mathur
confidence: 99%
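The self-attention mechanism the last snippet refers to can be illustrated with a bare scaled dot-product sketch: each token's representation is remixed as a similarity-weighted average of all tokens. This is only the core operation from [21], with the learned query/key/value projections and multiple heads omitted; the tensor sizes are illustrative.

```python
import torch

def self_attention(x):
    """Scaled dot-product self-attention over x: (batch, seq_len, dim)."""
    d = x.size(-1)
    scores = x @ x.transpose(-2, -1) / d ** 0.5  # (batch, seq_len, seq_len) token similarities
    weights = torch.softmax(scores, dim=-1)      # each row is a distribution over tokens
    return weights @ x                           # context-mixed representations, same shape as x

x = torch.randn(2, 5, 16)   # 2 sentences, 5 tokens, 16-dim features
out = self_attention(x)
print(out.shape)            # torch.Size([2, 5, 16])
```

Because every token attends to every other token in one step, the transformer-based predictor can condition each target position on the full source and target context, which is what makes it attractive as a QE feature extractor.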