2019
DOI: 10.26881/bp.2019.4.03
Method of measuring the effort related to post-editing machine translated outputs produced in the English>Polish language pair by Google, Microsoft and DeepL MT engines: A pilot study

Abstract: This article presents the methodology and results of a pilot study concerning the impact of three popular and widely accessible machine translation engines (developed by Google, Microsoft and DeepL) on the pace of post-editing work and on the general effort related to post-editing raw MT outputs. Fourteen volunteers were asked to translate and post-edit two source texts of a similar character and level of complexity. The results of their work were collected and compared to develop a set of quantit…
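The abstract does not reproduce the quantitative indicators themselves. As a rough, hypothetical illustration of the kind of proxies such comparisons commonly rely on (not the study's actual measures), the Python sketch below computes a post-editing pace in source words per minute and a word-level, HTER-style edit rate between a raw MT output and its post-edited version; all names and example values are invented for illustration.

def edit_rate(mt_output: str, post_edited: str) -> float:
    """Word-level, HTER-style proxy: minimum number of token edits
    (insertions, deletions, substitutions) needed to turn the raw MT
    output into its post-edited version, divided by the length of the
    post-edited version."""
    mt_tokens = mt_output.split()
    pe_tokens = post_edited.split()
    # Word-level Levenshtein distance via dynamic programming.
    prev = list(range(len(pe_tokens) + 1))
    for i, mt_tok in enumerate(mt_tokens, start=1):
        curr = [i]
        for j, pe_tok in enumerate(pe_tokens, start=1):
            cost = 0 if mt_tok == pe_tok else 1
            curr.append(min(prev[j] + 1,          # delete an MT token
                            curr[j - 1] + 1,      # insert a post-edit token
                            prev[j - 1] + cost))  # substitute
        prev = curr
    return prev[-1] / max(len(pe_tokens), 1)


def post_editing_pace(source_text: str, seconds_spent: float) -> float:
    """Post-editing pace expressed as source words per minute."""
    return len(source_text.split()) / (seconds_spent / 60.0)


# Hypothetical example values, for illustration only.
raw_mt = "The cat sit on mat"
post_edited_text = "The cat sits on the mat"
print(f"edit rate: {edit_rate(raw_mt, post_edited_text):.2f}")  # 0.33
print(f"pace: {post_editing_pace('Kot siedzi na macie', 95.0):.1f} words/min")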

Cited by 4 publications (2 citation statements). References 5 publications.
“…With its high translation quality, especially for European languages, DeepL outperformed all other competing tools. The quality of DeepL translations has been studied very frequently through different evaluation approaches (e.g., manually using Human Translation Edit Rate (HTER) or automatically with metrics such as the BLEU score), confirming the outperformance of this translation engine compared to other freely accessible ones (Kur, 2019; Bellés-Calvera and Quintana, 2021).…”
Section: Methods (citation type: mentioning)
Confidence: 99%
“…With its high translation quality, especially for European languages, DeepL outperformed all other competing tools. The quality of DeepL translations has been studied very frequently through different evaluation approaches (e.g., manually using Human Translation Edit Rate (HTER) or automatically with metrics such as the BLEU score), confirming the outperformance of this translation engine compared to other freely accessible ones (Kur, 2019; Bellés-Calvera and Quintana, 2021).…”
Section: DeepL Translator (citation type: mentioning)
Confidence: 99%
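For the automatic evaluation route mentioned in the statements above, corpus-level scores such as BLEU and TER can be computed with off-the-shelf tooling. The snippet below is a minimal sketch assuming the sacrebleu package (version 2.x); the hypothesis and reference sentences are invented examples and are not taken from any of the cited studies.

from sacrebleu.metrics import BLEU, TER

hypotheses = ["The cat sits on the mat."]          # raw MT output (invented)
references = [["The cat is sitting on the mat."]]  # one reference stream (invented)

bleu = BLEU().corpus_score(hypotheses, references)
ter = TER().corpus_score(hypotheses, references)

print(f"BLEU: {bleu.score:.1f}")  # higher is better
print(f"TER:  {ter.score:.1f}")   # lower is better (an edit-rate-style metric)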