Proceedings of the Third Conference on Machine Translation: Shared Task Papers 2018
DOI: 10.18653/v1/w18-6440
The AFRL-Ohio State WMT18 Multimodal System: Combining Visual with Traditional

Abstract: AFRL-Ohio State extends its usage of visual domain-driven machine translation for use as a peer with traditional machine translation systems. As a peer, it is enveloped into a system combination of neural and statistical MT systems to present a composite translation.
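The abstract describes folding the visual system's output into a system combination of neural and statistical MT hypotheses. As a toy illustration of the general idea (not the authors' actual combination method), one simple consensus strategy picks the hypothesis closest on average to all the others; the function and distance measure below are hypothetical stand-ins:

```python
def consensus_translation(hypotheses):
    """Pick the hypothesis closest on average to all others
    (a crude consensus proxy for MT system combination)."""
    def distance(a, b):
        # Token-level symmetric difference as a cheap divergence measure.
        ta, tb = set(a.split()), set(b.split())
        return len(ta ^ tb)
    return min(hypotheses, key=lambda h: sum(distance(h, o) for o in hypotheses))

# Toy hypotheses standing in for outputs of the combined systems.
hyps = ["the dog runs fast", "the dog runs quickly", "a dog sprints"]
print(consensus_translation(hyps))
```

Real system combination (e.g. confusion-network or minimum-Bayes-risk methods) operates on scored lattices rather than raw strings; this sketch only conveys the consensus intuition.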

Cited by 4 publications (3 citation statements). References 11 publications.
“…The CUNI submissions use two architectures based on the self-attentive Transformer model (Vaswani et al., 2017). For German and Czech, a language model is used to extract pseudo-in-domain …”

Table 5: Participants in the WMT18 multimodal machine translation shared task.

  ID              Participating team
  AFRL-OHIOSTATE  Air Force Research Laboratory & Ohio State University (Gwinnup et al., 2018)
  CUNI            Univerzita Karlova v Praze (Helcl et al., 2018)
  LIUMCVC         Laboratoire d'Informatique de l'Université du Maine & Universitat Autonoma de Barcelona Computer Vision Center (Caglayan et al., 2018)
  MeMAD           Aalto University, Helsinki University & EURECOM (Grönroos et al., 2018)
  OSU-BAIDU       Oregon State University & Baidu Research (Zheng et al., 2018)
  SHEF            University of Sheffield
  UMONS           Université de Mons (Delbrouck and Dupont, 2018)

Section: CUNI (Task 1)
Confidence: 99%
“…An n-best list from their SMT is reranked using a bi-directional NMT trained on the aforementioned source/target word sequences. Finally, Duselis et al. (2017) and Gwinnup et al. (2018) propose a pure retrieval system without any reranking involved. For a given image, they first obtain a set of candidate captions from a pretrained image captioning system.…”

Section: Reranking and Retrieval-Based Approaches
Confidence: 99%
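The citation above describes a pure retrieval approach: given an image, gather candidate captions from a pretrained captioning system and select one, with no reranking model. A minimal sketch of the selection step, using a hypothetical token-overlap score against a glossed source sentence (the actual systems use learned captioning and translation models, not this heuristic):

```python
def select_caption(source_gloss, candidate_captions):
    """Pick the candidate caption with the highest token overlap
    against a (hypothetical) glossed source sentence."""
    def overlap(caption):
        return len(set(caption.lower().split()) & set(source_gloss))
    return max(candidate_captions, key=overlap)

# Toy candidates standing in for captions retrieved from a pretrained
# image captioning system.
candidates = [
    "a dog runs across the field",
    "two people ride bicycles",
    "a cat sleeps on a sofa",
]
glossed_source = {"dog", "field", "run"}  # toy gloss of the source sentence
print(select_caption(glossed_source, candidates))
```

The sketch only shows the shape of retrieval-as-translation: the output is selected from pre-generated candidates rather than decoded token by token.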