Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics 2014
DOI: 10.3115/v1/e14-1002
Undirected Machine Translation with Discriminative Reinforcement Learning

Abstract: We present a novel Undirected Machine Translation model of Hierarchical MT that is not constrained to the standard bottom-up inference order. Removing the ordering constraint makes it possible to condition on top-down structure and surrounding context. This allows the introduction of a new class of contextual features that are not constrained to condition only on the bottom-up context. The model builds translation derivations efficiently in a greedy fashion. It is trained to learn to choose jointly the best act…

Cited by 2 publications (3 citation statements); References 23 publications.
“…This idea inspired and is closely related to our potential BLEU, except that in our case computing an admissible heuristic is too costly, so our potential BLEU is more like an average potential. Gesmundo and Henderson (2014) also consider rankings between partial translation pairs. However, they evaluate a partial translation by extending it to a complete translation through re-decoding, and thus need many passes of decoding for many partial translations, while ours needs only one pass of decoding for all partial translations and is therefore much more efficient.…”
Section: Related Work
confidence: 99%
“…This is the essential observation behind so-called cube pruning and cube growing approaches (Gesmundo and Henderson, 2010; Huang and Chiang, 2005) for enumerating k best derivations.…”
Section: Methods
confidence: 99%
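The cube-pruning idea mentioned above can be illustrated with a minimal sketch: instead of scoring every pairing of two sorted lists of sub-derivation costs, a heap lazily explores the grid frontier and stops after the k best combinations. This is an illustrative reduction of the technique (the function name and the use of plain additive costs are assumptions for the example, not the cited papers' implementation).

```python
import heapq

def k_best_combinations(costs_a, costs_b, k):
    """Lazily enumerate the k lowest-cost pairings of two sorted
    sub-derivation cost lists. Only the frontier of the |A| x |B|
    grid is scored, which is the core idea behind cube pruning."""
    # Start from the best-best corner of the grid.
    heap = [(costs_a[0] + costs_b[0], 0, 0)]
    seen = {(0, 0)}
    out = []
    while heap and len(out) < k:
        cost, i, j = heapq.heappop(heap)
        out.append((cost, i, j))
        # Push the two grid neighbours of the popped cell.
        for ni, nj in ((i + 1, j), (i, j + 1)):
            if ni < len(costs_a) and nj < len(costs_b) and (ni, nj) not in seen:
                seen.add((ni, nj))
                heapq.heappush(heap, (costs_a[ni] + costs_b[nj], ni, nj))
    return out

# Example: 3 best of the 3x2 grid, scoring only 5 of 6 cells at most.
print(k_best_combinations([1, 3, 5], [2, 4], 3))
```

In real hierarchical MT decoding the "cost" of a combined cell also includes non-monotonic terms such as language-model scores, which is why cube pruning is approximate rather than exact k-best search.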
“…Algorithm 1 formalizes this process. The process can be made even more efficient using the faster cube pruning approach introduced by Gesmundo and Henderson (2010).…”
Section: Methods
confidence: 99%