Proceedings of the 20th International Conference on Spoken Language Translation (IWSLT 2023)
DOI: 10.18653/v1/2023.iwslt-1.11
Direct Models for Simultaneous Translation and Automatic Subtitling: FBK@IWSLT2023

Abstract: This paper describes FBK's participation in the Simultaneous Translation and Automatic Subtitling tracks of the IWSLT 2023 Evaluation Campaign. Our submission focused on the use of direct architectures to perform both tasks: for the simultaneous one, we leveraged the knowledge already acquired by offline-trained models and directly applied a policy to obtain the real-time inference; for the subtitling one, we adapted the direct ST model to produce well-formed subtitles and exploited the same architecture t…

Cited by 1 publication (3 citation statements). References 38 publications (53 reference statements).
“…Similarly to previous years (Gaido et al., 2022; Papi et al., 2023a), we participated in the Simultaneous Translation evaluation campaign, focusing on the speech-to-text translation sub-track. For this year's submission, we opted for the use of the new SeamlessM4T model, which is allowed for the task, as the underlying model of the SimulST policy AlignAtt.…”
Section: SimulSeamless
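The AlignAtt policy mentioned above decides, at each decoding step, whether to emit the candidate token or wait for more audio based on its cross-attention to the source frames. As a rough, hedged sketch (not the authors' implementation; the function name, the `f` frame threshold, and the input shapes are assumptions for illustration), the core decision can be written as:

```python
import numpy as np

def alignatt_emit(attn_row, num_frames, f=2):
    """Sketch of an AlignAtt-style emission decision.

    attn_row: cross-attention weights of the candidate target token
              over the source audio frames read so far (1-D array).
    num_frames: number of source frames currently available.
    f: threshold on closeness to the audio boundary (hypothetical default).
    Returns True if the token may be emitted, False if the policy should
    wait for more audio.
    """
    # Frame the candidate token is most strongly aligned to
    aligned_frame = int(np.argmax(attn_row))
    # Emit only if the alignment falls away from the last f frames,
    # i.e. the token does not depend on audio still being received
    return aligned_frame < num_frames - f
```

In this sketch, a token attending mostly to early frames is emitted immediately, while one attending near the end of the available audio triggers a wait for more input.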
“…In Table 1, we report the scores for the final submission for each language pair, including LAAL and ATD latency metrics and their corresponding computationally aware scores. SimulSeamless is compared with all the participants of last year: CMU (Yan et al., 2023), CUNI-KIT (Polák et al., 2023), FBK (Papi et al., 2023a), HW-TSC (Guo et al., 2023), NAIST (Fukuda et al., 2023), and XIAOMI (Huang et al., 2023). Comparisons are not reported for cs-en since it is a new language direction for the task.…”
Section: Comparison With Last Year's Participants
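The LAAL metric cited above (Length-Adaptive Average Lagging) extends Average Lagging by normalizing with the longer of the hypothesis and reference, so that over-generating translations are not rewarded with artificially low latency. As a hedged sketch under that reading of the metric (function and variable names are illustrative, not from the evaluation toolkit):

```python
def laal(delays, src_len, hyp_len, ref_len):
    """Sketch of Length-Adaptive Average Lagging.

    delays: delays[i] is the amount of source (e.g. seconds or frames)
            read when target token i was emitted.
    src_len: total source length, in the same units as the delays.
    hyp_len, ref_len: hypothesis and reference lengths in target tokens.
    """
    # tau: 1-based index of the first token emitted after the full source
    # was read (or the last token, if the source was never fully consumed)
    tau = next((i + 1 for i, d in enumerate(delays) if d >= src_len),
               len(delays))
    # Oracle reading rate, with the length-adaptive denominator
    rate = src_len / max(hyp_len, ref_len)
    # Average lag of the first tau tokens behind an ideal wait-free policy
    return sum(delays[i] - i * rate for i in range(tau)) / tau
```

For example, a policy that reads one source unit per emitted token over a length-3 source and length-3 outputs incurs a constant lag of one unit.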