Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 2015
DOI: 10.3115/v1/p15-1001

On Using Very Large Target Vocabulary for Neural Machine Translation


Cited by 644 publications (591 citation statements) · References 15 publications
“…Performance is measured with BLEU (Papineni et al., 2002), and statistical significance is computed with bootstrap resampling (Koehn, 2004). The result of the word-level baseline system is computed after post-processing its output following the approach of Jean et al. (2015), adapted to our scenario. This method (see §2) is driven by the attention model, which replaces the UNK tokens in the output with corresponding recommendations supplied as external knowledge.…”
Section: Results (citation type: mentioning, confidence: 99%)
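The replacement step this excerpt describes can be made concrete with a short sketch. The following is a minimal illustration, not Jean et al.'s actual implementation: it assumes a decoder that exposes its attention weights, and `dictionary` stands in for whatever external knowledge source supplies the recommendations; all names are illustrative.

```python
import numpy as np

def replace_unks(output_tokens, source_tokens, attention, dictionary):
    """Replace each UNK in the translation with a recommendation for the
    source word that received the most attention when the UNK was emitted.
    Falls back to copying the source word when no recommendation exists."""
    result = []
    for t, token in enumerate(output_tokens):
        if token == "<unk>":
            s = int(np.argmax(attention[t]))   # most-attended source position
            src_word = source_tokens[s]
            result.append(dictionary.get(src_word, src_word))
        else:
            result.append(token)
    return result

# Toy usage: the UNK at target position 2 attends mostly to source position 2.
attn = np.array([[0.90, 0.05, 0.05],
                 [0.05, 0.90, 0.05],
                 [0.05, 0.05, 0.90]])
print(replace_unks(["der", "alte", "<unk>"],
                   ["the", "old", "lighthouse"],
                   attn, {"lighthouse": "Leuchtturm"}))
# -> ['der', 'alte', 'Leuchtturm']
```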
“…A different technique is post-processing the translated sentences. Jean et al. (2015) and Luong and Manning (2015) replace unknown words either with the most likely aligned source word or with the translation determined by another word alignment model.…”
Section: Related Work (citation type: mentioning, confidence: 99%)
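A minimal sketch of this post-processing variant, assuming hard target-to-source alignments produced offline by a separate word alignment model; the `alignments` and `lexicon` structures are illustrative placeholders, not an API from any of the cited systems.

```python
def postprocess_unks(output_tokens, source_tokens, alignments, lexicon):
    """Replace unknown words using hard alignments from a separate word
    alignment model: translate the aligned source word via a bilingual
    lexicon when possible, otherwise copy the source word through."""
    fixed = []
    for t, token in enumerate(output_tokens):
        if token != "<unk>":
            fixed.append(token)
            continue
        s = alignments.get(t)            # aligned source position, if any
        if s is None:
            fixed.append(token)          # no alignment: leave the UNK
        else:
            src = source_tokens[s]
            fixed.append(lexicon.get(src, src))
    return fixed

# Target position 1 is aligned to source position 2 ("car").
print(postprocess_unks(["la", "<unk>", "rouge"],
                       ["the", "red", "car"],
                       {1: 2}, {"car": "voiture"}))
# -> ['la', 'voiture', 'rouge']
```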
“…Kalchbrenner and Blunsom (2013) [5] used a standard RNN hidden unit for the decoder and a convolutional neural network to encode the source sentence representation. Sutskever et al. [9], in contrast, adopted a different RNN variant with an LSTM-inspired hidden unit, the gated recurrent unit (GRU), for both the encoder and the decoder. Bahdanau et al. (2015) [8] successfully applied the attention mechanism to NMT, proposing attention-based NMT to replace the fixed vector c.…”
Section: A RNN Encoder-Decoder (citation type: mentioning, confidence: 99%)
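The change from a fixed vector c to a per-step context vector is easy to see in code. Below is a minimal NumPy sketch of additive (Bahdanau-style) attention; the parameter names Wa, Ua, va echo the paper's notation, but the dimensions and random initialization are purely illustrative.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def bahdanau_context(encoder_states, decoder_state, Wa, Ua, va):
    """Additive attention: score each source annotation against the current
    decoder state, then return the attention-weighted sum as a per-step
    context vector c_t, instead of one fixed vector c for all steps."""
    # encoder_states: (src_len, enc_dim); decoder_state: (dec_dim,)
    scores = np.array([va @ np.tanh(Wa @ decoder_state + Ua @ h)
                       for h in encoder_states])
    alpha = softmax(scores)               # alignment weights over the source
    return alpha @ encoder_states, alpha  # c_t and the weights

rng = np.random.default_rng(0)
enc = rng.standard_normal((5, 8))         # 5 source positions, annotation dim 8
s = rng.standard_normal(6)                # decoder state, dim 6
Wa = rng.standard_normal((8, 6))
Ua = rng.standard_normal((8, 8))
va = rng.standard_normal(8)
c_t, alpha = bahdanau_context(enc, s, Wa, Ua, va)
print(c_t.shape, alpha.round(2))          # (8,) and weights summing to 1
```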
“…Luong et al. [14] and Li et al. [12] propose a simple alignment-based technique that can replace out-of-vocabulary words with similar words. Jean et al. [7] use a large vocabulary with a method based on importance sampling.…”
Section: Related Work (citation type: mentioning, confidence: 99%)
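The importance-sampling idea the excerpt attributes to Jean et al. can be sketched as a sampled softmax: normalize only over the target word plus a random subset of the vocabulary, so the cost of the output layer no longer grows with the full vocabulary size. This toy version uses a uniform proposal for brevity, whereas the paper partitions the training corpus so each partition trains on a manageable vocabulary subset; all names here are illustrative.

```python
import numpy as np

def sampled_softmax_nll(logits_fn, target_id, vocab_size, sample_size, rng):
    """Approximate the softmax negative log-likelihood over a huge vocabulary
    by normalizing only over the target word plus a random sample of words,
    in the spirit of the importance-sampling approach of Jean et al. (2015)."""
    sample = rng.choice(vocab_size, size=sample_size, replace=False)
    subset = np.unique(np.append(sample, target_id))
    logits = logits_fn(subset)            # scores for the subset only
    logits -= logits.max()                # numerical stability
    log_z = np.log(np.exp(logits).sum())  # partition function over the subset
    target_logit = logits[np.where(subset == target_id)[0][0]]
    return -(target_logit - log_z)

# Toy usage with a random scoring function over a 100k-word vocabulary.
rng = np.random.default_rng(0)
W = rng.standard_normal((100_000, 16))    # output embedding per target word
h = rng.standard_normal(16)               # decoder hidden state
loss = sampled_softmax_nll(lambda ids: W[ids] @ h, target_id=42,
                           vocab_size=100_000, sample_size=512, rng=rng)
print(float(loss))
```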