Proceedings of the 2nd Workshop on Neural Machine Translation and Generation 2018
DOI: 10.18653/v1/w18-2701
Findings of the Second Workshop on Neural Machine Translation and Generation

Abstract: This document describes the findings of the Second Workshop on Neural Machine Translation and Generation, held in concert with the annual conference of the Association for Computational Linguistics (ACL 2018). First, we summarize the research trends of papers presented in the proceedings, and note that there is particular interest in linguistic structure, domain adaptation, data augmentation, handling inadequate resources, and analysis of models. Second, we describe the results of the workshop's shared task on…

Cited by 11 publications (10 citation statements). References 30 publications.
“…The trade-off between model performance and computational efficiency has been explored in multiple shared tasks and competitions. The series of Efficient Neural Machine Translation challenges [1,4,6] measured machine translation inference performance on CPUs and GPUs with standardized training data and hardware. Translation quality was evaluated by the BLEU score, while computational efficiency was measured by several quantities, including the wall-clock time the model used to translate the private test set, peak RAM and GPU RAM consumption, the size of the model on disk, and the total size of the Docker image, which could have included rule-based and hard-coded approaches.…”
Section: Related Work
confidence: 99%
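The efficiency measurements described above (wall-clock translation time and peak memory) can be sketched with standard-library tools. This is a minimal illustration, not the challenge's actual harness; `translate_batch` is a hypothetical stand-in for a real NMT system.

```python
# Minimal sketch of measuring translation wall-clock time and peak
# resident memory for a toy "translation" function.
import time
import resource


def translate_batch(sentences):
    # Hypothetical placeholder: a real system would run NMT inference here.
    return [s.upper() for s in sentences]


def measure(sentences):
    start = time.perf_counter()
    outputs = translate_batch(sentences)
    elapsed = time.perf_counter() - start
    # ru_maxrss is this process's peak resident set size so far
    # (kilobytes on Linux, bytes on macOS).
    peak_rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return outputs, elapsed, peak_rss


outputs, elapsed, peak_rss = measure(["hello world", "neural machine translation"])
print(f"{len(outputs)} sentences in {elapsed:.4f}s, peak RSS {peak_rss}")
```

A real evaluation would, as the excerpt notes, also account for GPU memory, on-disk model size, and container image size.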
“…In this study, we introduce the NICT neural translation system submitted to the Second Workshop on Neural Machine Translation and Generation (NMT-2018) (Birch et al., 2018). A characteristic of the system is that translation quality is improved by introducing self-training, using open-source neural translation systems and the defined training data.…”
Section: Introduction
confidence: 99%
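The self-training mentioned above can be sketched at the data level: a baseline model translates additional source-side sentences, and the resulting synthetic pairs are appended to the parallel training data. This is a generic illustration of the idea, not the NICT system; `baseline_translate` and the toy data are hypothetical.

```python
# Minimal sketch of the self-training data-construction step:
# synthetic (source, hypothesis) pairs from a baseline model are
# appended to the existing parallel corpus.
def self_training_pairs(baseline_translate, mono_source):
    """baseline_translate: hypothetical function mapping a source
    sentence to a hypothesis translation."""
    return [(src, baseline_translate(src)) for src in mono_source]


parallel = [("guten tag", "good day")]          # toy parallel corpus
mono = ["hallo welt"]                           # toy monolingual source data
baseline = lambda s: {"hallo welt": "hello world"}.get(s, s)

augmented = parallel + self_training_pairs(baseline, mono)
print(augmented)
```

The augmented corpus is then used to retrain the model, typically iterating this procedure.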
“…As neural machine translation becomes more widely deployed in production environments, it also becomes increasingly important to serve translation models as fast and as memory-efficiently as possible, both on dedicated GPUs and on standard CPU hardware. The WNMT 2018 shared task focused on comparing different systems on both accuracy and computational efficiency (Birch et al., 2018). This paper describes the entry of the OpenNMT system to this competition.…”
Section: Introduction
confidence: 99%