2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)
DOI: 10.1109/apsipaasc47483.2019.9023321
AP19-OLR Challenge: Three Tasks and Their Baselines

Abstract: This paper introduces the fourth Oriental Language Recognition (OLR) challenge, AP19-OLR, including the data profile, the tasks and the evaluation principles. The OLR challenge has been held successfully for three consecutive years, along with the APSIPA Annual Summit and Conference (APSIPA ASC). The challenge this year still focuses on practical and challenging tasks, namely (1) short-utterance LID, (2) cross-channel LID and (3) zero-resource LID. The event this year includes more languages and more real-life da…

Cited by 19 publications (22 citation statements)
References 13 publications
“…6], with parameters C_miss = C_FA = 1 and P_target = 0.5, on the predicted language scores. Results: From Table 4, we compare the C_avg results of models 1, 2, and 3 to the respective baseline results of AP19-OLR (0.126) [12], MGB-3 (0.218) [27], and DoSL (0.013) [19], and note that our results are within 0.1, 1.8, and 2.4 percentage points. We note that model 1 outperforms other models on AP19-OLR and DoSL, while model 2 is best on its reference dataset MGB-3 and the best overall model on average.…”
Section: End-to-end Experiments
confidence: 93%
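The C_avg metric referenced in the excerpt above can be sketched as follows. This is a minimal illustration of the average detection cost with C_miss = C_FA = 1 and P_target = 0.5, assuming a fixed decision threshold at 0 on the scores; the official OLR scoring script may handle thresholds and trial lists differently.

```python
import numpy as np

def c_avg(scores, labels, p_target=0.5):
    """Average detection cost (C_avg) with C_miss = C_FA = 1.

    A sketch of the metric used in the OLR challenges; details of the
    official scoring (thresholding, trial selection) may differ.

    scores: (n_utts, n_langs) detection scores; a language is
            "accepted" here when its score exceeds 0.
    labels: (n_utts,) true language indices.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    n_langs = scores.shape[1]
    decisions = scores > 0.0  # fixed threshold, for illustration only
    cost = 0.0
    for lang in range(n_langs):
        target = labels == lang
        # P_miss: fraction of target trials rejected for their own language
        p_miss = 1.0 - decisions[target, lang].mean()
        # P_FA averaged over each non-target language separately
        p_fa_sum = 0.0
        for other in range(n_langs):
            if other == lang:
                continue
            mask = labels == other
            p_fa_sum += decisions[mask, lang].mean()
        cost += p_target * p_miss + (1 - p_target) / (n_langs - 1) * p_fa_sum
    return cost / n_langs
```

A perfect system (every language accepted only on its own trials) yields C_avg = 0, and a system that always decides the wrong language yields C_avg = 1 under these cost parameters.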
“…We did not have access to the NIST LRE datasets. The Oriental Language Recognition challenge 2019 dataset (AP19-OLR) contains speech in 10 languages, mainly spoken in Asia, and one out-of-set (OOS) mixture of European languages [12]. The dataset includes 261 hours of training data and 5 hours of test data.…”
Section: Datasets
confidence: 99%
“…EXPERIMENTS. 3.1. Experimental conditions. Experiments on the SLID task are carried out on the data corpus from the Oriental Language Recognition (OLR) 2020 Challenge [20, 21, 22]. In training, there are around 110k utterances (more than 100 hours) from 10 languages.…”
confidence: 99%