Proceedings of the 17th International Conference on Spoken Language Translation 2020
DOI: 10.18653/v1/2020.iwslt-1.2
ON-TRAC Consortium for End-to-End and Simultaneous Speech Translation Challenge Tasks at IWSLT 2020

Abstract: This paper describes the ON-TRAC Consortium translation systems developed for two challenge tracks featured in the Evaluation Campaign of IWSLT 2020, offline speech translation and simultaneous speech translation. ON-TRAC Consortium is composed of researchers from three French academic laboratories: LIA

Cited by 14 publications (20 citation statements)
References 24 publications (22 reference statements)
“…Teams followed the suggestion to submit multiple systems per regime, which resulted in a total of 56 systems overall. ON-TRAC (Elbayad et al, 2020) participated in both the speech and text tracks. The authors used a hybrid pipeline for simultaneous speech (Ma et al, 2019).…”
Section: Submissions
confidence: 99%
“…ON-TRAC (Elbayad et al, 2020) participated with end-to-end systems, focusing on speech segmentation, data augmentation and the ensembling of multiple models. They experimented with several attention-based encoder-decoder models sharing the general backbone architecture described in , which comprises an encoder with two VGG-like (Simonyan and Zisserman, 2015) CNN blocks followed by five stacked BLSTM layers.…”
Section: Submissions
confidence: 99%
“…EN→DE Task The performance on the text-to-text EN→DE task is shown in Figure 4(a). We can see that the proposed CAAT consistently outperforms wait-k with SBS and the best 2020 results from ON-TRAC (Elbayad et al, 2020), especially in the low-latency regime, and that CAAT with model ensembling is nearly equivalent to the offline result. Moreover, Figure 4(a) also shows that model ensembling improves the BLEU score to some extent across latency regimes, with the gain most pronounced in the low-latency regime.…”
Section: Text-to-text Simultaneous Translation
confidence: 84%
“…In addition, SpecAugment [19] is used to train our EN-DE char model. Further details can be found in [10,5].…”
Section: Evaluation of Simultaneous Speech Translation
confidence: 99%