Proceedings of the Second Conference on Machine Translation 2017
DOI: 10.18653/v1/w17-4777
CUNI System for WMT17 Automatic Post-Editing Task

Abstract: Following up on last year's CUNI system for automatic post-editing of machine translation output, we focus on exploiting the potential of sequence-to-sequence neural models for this task. In this system description paper, we compare several encoder-decoder architectures on smaller-scale models and present the system we submitted to the WMT 2017 Automatic Post-Editing shared task based on this preliminary comparison. We also show how simple inclusion of synthetic data can improve the overall performance as mea…
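The abstract notes that simply including synthetic data can improve performance. Below is a minimal sketch of one common way to combine a small authentic APE training set with a larger synthetic one; the file names, tab-separated triplet layout, and oversampling factor are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: mixing authentic and synthetic (src, mt, pe) triplets.
# File names and the oversampling factor are assumptions for illustration.

def read_triplets(path):
    """Read (src, mt, pe) triplets stored one tab-separated example per line."""
    with open(path, encoding="utf-8") as f:
        return [tuple(line.rstrip("\n").split("\t")) for line in f]

def build_training_set(authentic, synthetic, oversample=10):
    """Repeat the small authentic set so it is not drowned out by synthetic data."""
    return authentic * oversample + synthetic

if __name__ == "__main__":
    authentic = read_triplets("train.authentic.tsv")   # assumed file
    synthetic = read_triplets("train.synthetic.tsv")   # assumed file
    training_data = build_training_set(authentic, synthetic)
    print(f"{len(training_data)} training triplets")
```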

Cited by 16 publications (23 citation statements) · References 9 publications
“…There are three components in our architecture: machine translation (MT), action generator (AG), and interpreter (LM). We compare our MT+AG+LM architecture against MT+AG (Bérard et al., 2017). Furthermore, we compare against the monolingual SEQ2SEQ (TGT→PE) as well as the multi-source SEQ2SEQ (SRC+TGT→PE) (Varis and Bojar, 2017). The monolingual SEQ2SEQ (TGT→PE) model is an attentional SEQ2SEQ model (Bahdanau et al., 2015) that takes the target sentence as input and outputs the desired PE sentence.…”
Section: Methods
confidence: 99%
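The statement above contrasts a monolingual SEQ2SEQ model (TGT→PE) with a multi-source one (SRC+TGT→PE). A minimal sketch of how the two setups can differ purely in input preparation, assuming a single-encoder multi-source variant that concatenates the source and the MT output around a separator token; the token name and whitespace tokenization are assumptions.

```python
# Illustrative sketch: preparing monolingual (TGT -> PE) versus
# multi-source (SRC+TGT -> PE) seq2seq training examples.
SEP = "<sep>"  # assumed boundary token between source and MT output

def monolingual_example(mt: str, pe: str):
    """TGT -> PE: the model only sees the machine-translated sentence."""
    return mt.split(), pe.split()

def multisource_example(src: str, mt: str, pe: str):
    """SRC+TGT -> PE: single-encoder variant concatenating source and MT output."""
    return src.split() + [SEP] + mt.split(), pe.split()

src = "a small example"
mt = "ein kleines Beispiele"   # MT output with an error
pe = "ein kleines Beispiel"    # human post-edit
print(monolingual_example(mt, pe))
print(multisource_example(src, mt, pe))
```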
“…Blindly performing edits over the MT output, monolingual APE has difficulty correcting missing words or information from the source sentence. Neural multi-source MT architectures are applied to better capture the connection between the source sentence/machine-translated text and the PE output (Libovický et al., 2016; Varis and Bojar, 2017; Junczys-Dowmunt and Grundkiewicz, 2017). Chatterjee et al. (2017) have proposed learning to predict the sequence of edit operations, aka the program, to produce the post-edited sentence (c.f.…”
Section: Introduction
confidence: 99%
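The quote above mentions systems that learn to predict a sequence of edit operations (the "program") rather than the post-edited sentence itself. A minimal sketch of deriving such an operation sequence from an (mt, pe) token pair with Python's difflib; the KEEP/DELETE/INSERT labels are illustrative and not the exact scheme of the cited work.

```python
# Illustrative sketch: derive an edit-operation sequence from (mt, pe).
from difflib import SequenceMatcher

def edit_program(mt_tokens, pe_tokens):
    """Return a flat list of KEEP/DELETE/INSERT operations (assumed labels)."""
    ops = []
    matcher = SequenceMatcher(a=mt_tokens, b=pe_tokens)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            ops.extend(f"KEEP {tok}" for tok in mt_tokens[i1:i2])
        elif tag == "delete":
            ops.extend(f"DELETE {tok}" for tok in mt_tokens[i1:i2])
        elif tag == "insert":
            ops.extend(f"INSERT {tok}" for tok in pe_tokens[j1:j2])
        else:  # replace: delete the MT tokens, insert the PE tokens
            ops.extend(f"DELETE {tok}" for tok in mt_tokens[i1:i2])
            ops.extend(f"INSERT {tok}" for tok in pe_tokens[j1:j2])
    return ops

print(edit_program("ein kleines Beispiele".split(), "ein kleines Beispiel".split()))
```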
“…built upon the two-encoder architecture of multi-source models (Libovický et al, 2016) by means of concatenating both weighted contexts of encoded src and mt. Varis and Bojar (2017) compared two multi-source models, one using a single encoder with concatenation of src and mt sentences, and a second one using two character-level encoders for mt and src along with a character-level decoder.…”
Section: Related Research
confidence: 99%
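The comparison above contrasts a single encoder over concatenated src and mt with two separate encoders whose attention contexts are concatenated at each decoder step. A minimal NumPy sketch of the latter idea, using dot-product scoring and toy dimensions as simplifying assumptions rather than the exact architecture of the cited systems.

```python
# Illustrative sketch: attend separately over encoded src and mt states,
# then concatenate the two weighted contexts for one decoder step.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, states):
    """Dot-product attention: weights over encoder states, weighted sum as context."""
    weights = softmax(states @ query)   # shape (len,)
    return weights @ states             # shape (dim,)

rng = np.random.default_rng(0)
dim = 8
src_states = rng.normal(size=(5, dim))  # encoded source sentence
mt_states = rng.normal(size=(6, dim))   # encoded MT output
decoder_state = rng.normal(size=dim)

context = np.concatenate([attend(decoder_state, src_states),
                          attend(decoder_state, mt_states)])
print(context.shape)  # (16,) -> fed into the decoder step
```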
“…Assuming that post-editing is reversible, Pal et al. (2017) have proposed an attention mechanism over bidirectional models, mt → pe and pe → mt. Several other researchers have proposed using multi-input seq2seq models for contextual APE (Bérard et al., 2017; Libovický et al., 2016; Varis and Bojar, 2017; Pal et al., 2017; Libovický and Helcl, 2017; …). All these systems employ separate encoders for the two inputs, src and mt.…”
Section: Related Work
confidence: 99%