2019
DOI: 10.48550/arxiv.1911.01799
Preprint

CN-CELEB: a challenging Chinese speaker recognition dataset

Cited by 3 publications (3 citation statements)
References 17 publications
“…Three datasets were used in our experiments: VoxCeleb [20,21], SITW [22] and CNCeleb [23]. VoxCeleb was used to train models (x-vector, PLDA, LDA, subspace DNF), while the other two were used for performance evaluation.…”
Section: Data and Setting (mentioning)
confidence: 99%
“…Notable works include masking [1] and mapping [2] based approach, Speech Enhancement Generative Adversarial Network (SEGAN) [3], Deep Feature Loss (DFL) [4], end-to-end metric optimization [5], and Transformer based approach [6,7]. Meanwhile, an active research exists in the robustness of Speaker Verification (SV) systems [8,9,10,11]. Another reason for interest in speech enhancement arises from the notion that it is considered as a modern solution to improve noise robustness in SV systems [10,12,13].…”
Section: Introduction (mentioning)
confidence: 99%
“…one trained with (noisy) data augmentations is superior. Training data choice is important to us because we focus on BabyTrain and large "in the wild" public data releases such as SITW [19], VoxCeleb [23], and CN-Celeb [9] do not explicitly account for children speech.…”
Section: Introduction (mentioning)
confidence: 99%