Proceedings of the 20th International Conference on Spoken Language Translation (IWSLT 2023), 2023
DOI: 10.18653/v1/2023.iwslt-1.39

The Xiaomi AI Lab’s Speech Translation Systems for IWSLT 2023 Offline Task, Simultaneous Task and Speech-to-Speech Task

Cited by 1 publication (2 citation statements) | References: 0 publications
“…In Table 1, we report the scores for the final submission for each language pair, including LAAL and ATD latency metrics and their corresponding computationally aware scores. SimulSeamless is compared with all the participants of last year: CMU (Yan et al., 2023), CUNI-KIT (Polák et al., 2023), FBK (Papi et al., 2023a), HW-TSC (Guo et al., 2023), NAIST (Fukuda et al., 2023), and XIAOMI (Huang et al., 2023). Comparisons are not reported for cs-en since it is a new language direction for the task.…”
Section: Comparison With Last Year's Participants
confidence: 99%
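The comparison quoted above relies on the LAAL and ATD latency metrics. As a reference point, here is a sketch of the LAAL definition as commonly stated in the literature (notation reconstructed from the cited work, not from this page): LAAL is Average Lagging made robust to over-generation by normalizing with the longer of the hypothesis and the reference,

LAAL = \frac{1}{\tau'} \sum_{i=1}^{\tau'} \left( d_i - (i-1)\,\frac{T}{\max(|Y|,\,|Y^{*}|)} \right)

where d_i is the elapsed source time when target token i is emitted, T is the total duration of the source audio, |Y| and |Y^{*}| are the hypothesis and reference lengths, and \tau' indexes the first token emitted after the full source has been read. The "computationally aware" variants additionally charge each d_i with the real inference time spent by the system.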
“…The increasing interest has led to numerous direct and cascade models participating in the challenge every year (Ansari et al., 2020; Anastasopoulos et al., 2021, 2022; Agarwal et al., 2023), all vying for the title of the best approach to realize a SimulST system from scratch. More recently, the practice of using models without ad-hoc training for the simultaneous scenario has become widespread (Polák et al., 2022; Gaido et al., 2022; Papi et al., 2023a; Polák et al., 2023; Yan et al., 2023; Huang et al., 2023), demonstrating that competitive or even superior results can be achieved compared to systems specifically tailored for SimulST (Papi et al., 2022a). Among the strategies used to repurpose standard (offline) ST models for SimulST (Liu et al., 2020; Papi et al., 2022a, 2023c), AlignAtt (Papi et al., 2023b) emerged as the best one, achieving new state-of-the-art results.…”
Section: Introduction
confidence: 99%
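Since AlignAtt is singled out above as the strongest policy for repurposing offline ST models, the following is a minimal sketch of its emission test, assuming a decoder that exposes the candidate token's cross-attention over the encoded audio frames. The function name, tensor layout, and the default number of "unstable" frames are illustrative assumptions, not the authors' code.

import torch

def may_emit(cross_attn: torch.Tensor, num_frames: int, f: int = 4) -> bool:
    """AlignAtt-style test (sketch): a candidate token is emitted only if
    it aligns to audio that is not among the last f received frames.

    cross_attn: (num_heads, num_frames) attention weights of the candidate
        target token over the audio frames available so far.
    """
    # Average over heads, then locate the frame the token attends to most.
    aligned_frame = int(cross_attn.mean(dim=0).argmax())
    # Frames near the end of the partial input may still change as more
    # audio arrives, so tokens aligned there are held back.
    return aligned_frame < num_frames - f

In use, the offline model decodes greedily until this test fails, emits the stable prefix produced so far, and then reads the next audio chunk before resuming, which is how an untouched offline ST model can serve as a SimulST system.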