Proceedings of the Third Conference on Machine Translation: Shared Task Papers 2018
DOI: 10.18653/v1/w18-6464
UAlacant machine translation quality estimation at WMT 2018: a simple approach using phrase tables and feed-forward neural networks

Abstract: We describe the Universitat d'Alacant submissions to the word- and sentence-level machine translation (MT) quality estimation (QE) shared task at WMT 2018. Our approach to word-level MT QE builds on previous work to mark the words in the machine-translated sentence as OK or BAD, and is extended to determine whether a word or sequence of words needs to be inserted in the gap after each word. Our sentence-level submission simply uses the edit operations predicted by the word-level approach to approximate TER. The metho…
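The abstract only sketches the sentence-level idea, so the following snippet is a minimal illustration, not the authors' implementation, of how edit operations predicted at the word and gap level could be turned into an approximate TER score. The function name, the OK/BAD tag encoding, and the length normalisation are assumptions of this sketch.

```python
# Hedged sketch: approximating sentence-level TER from word-level QE predictions.
# Tag names and the exact normalisation are assumptions, not the authors' code.

def approximate_ter(word_tags, gap_tags):
    """Estimate TER from word-level QE output.

    word_tags: one 'OK'/'BAD' tag per machine-translated word
               ('BAD' ~ the word must be edited or deleted).
    gap_tags:  one 'OK'/'BAD' tag per gap around the words
               ('BAD' ~ one or more words must be inserted there).
    """
    edits = word_tags.count('BAD') + gap_tags.count('BAD')
    # Approximate the reference length by the MT length plus the
    # predicted insertions (an assumption made for this sketch).
    approx_ref_len = len(word_tags) + gap_tags.count('BAD')
    return edits / max(approx_ref_len, 1)

# Example usage: one BAD word and one BAD gap in a 3-word translation.
print(approximate_ter(['OK', 'BAD', 'OK'], ['OK', 'OK', 'BAD', 'OK']))  # 0.5
```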

Cited by 2 publications (2 citation statements). References 18 publications.
“…Fluency is computed by taking the inverse of cross-entropy, according to an in-domain language model. Both measures are combined via simple arithmetic means on rescaled values, i.e., no machine learning is used. Since it is unsupervised, the method can only be meaningfully evaluated on the ranking task.…”

Table of participating teams embedded in the quoted passage:

ID        | Participating team
CMU-LTI   | Carnegie Mellon University, US (Hu et al., 2018)
JU-USAAR  | Jadavpur University, India & University of Saarland, Germany (Basu et al., 2018)
MQE       | Vicomtech, Spain (Etchegoyhen et al., 2018)
QEbrain   | Alibaba Group Inc., US
RTM       | Referential Translation Machines, Turkey (Biçici, 2018)
SHEF      | University of Sheffield, UK (Ive et al., 2018b)
TSKQE     | University of Hamburg (Duma and Menzel, 2018)
UAlacant  | Universitat d'Alacant, Spain (Sánchez-Martínez et al., 2018)
UNQE      | Jiangxi Normal University, China
UTartu    | University of Tartu, Estonia (Yankovskaya et al., 2018)

Section: MQE (T1), mentioning
confidence: 99%
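The quoted passage describes an unsupervised combination of measures. Purely as an illustrative sketch, and not the cited system's actual code, the computation it describes (fluency as the inverse of per-token cross-entropy under an in-domain language model, combined with a second measure by an arithmetic mean of rescaled values) could look like this; all names are assumptions.

```python
# Hedged sketch of the unsupervised scoring described in the quoted passage.
# Function and variable names are illustrative assumptions.

def fluency(log_probs):
    """Inverse of per-token cross-entropy (negative mean log-probability)."""
    cross_entropy = -sum(log_probs) / len(log_probs)
    return 1.0 / cross_entropy

def rescale(scores):
    """Min-max rescale a list of scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) if hi > lo else 0.5 for s in scores]

def combine(fluency_scores, adequacy_scores):
    """Arithmetic mean of the rescaled measures -- no machine learning involved."""
    f, a = rescale(fluency_scores), rescale(adequacy_scores)
    return [(fi + ai) / 2 for fi, ai in zip(f, a)]
```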
“…Their BLEU score was 16.5 for Shona-English translation. As for English-Swahili, we benchmarked against the work of [24]. We used a similar data set from [5].…”
Section: Discussion, mentioning
confidence: 99%