2014
DOI: 10.1186/s13636-014-0029-2

PLDA in the i-supervector space for text-independent speaker verification

Abstract: In this paper, we advocate the use of the uncompressed form of the i-vector and rely on subspace modeling with probabilistic linear discriminant analysis (PLDA) to handle speaker and session (or channel) variability. An i-vector is a low-dimensional vector containing both speaker and channel information acquired from a speech segment. When PLDA is used on an i-vector, dimension reduction is performed twice: first in the i-vector extraction process and second in the PLDA model. Keeping the full dimensional…
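To make the modeling assumption behind the abstract concrete, here is a minimal sketch of a simplified PLDA generative model, y = m + Vh + eps, applied to fixed-length vectors such as i-vectors or uncompressed i-supervectors. The dimensions, variable names, and the isotropic residual are illustrative assumptions, not details taken from the paper.

    import numpy as np

    # Simplified PLDA generative model: y = m + V h + eps, with a shared speaker
    # factor h across sessions and an isotropic residual for channel/noise.
    # All dimensions below are illustrative, not values from the paper.
    rng = np.random.default_rng(0)
    D, Q = 400, 120                          # observed dimension, speaker-subspace rank
    m = rng.normal(size=D)                   # global mean
    V = rng.normal(scale=0.1, size=(D, Q))   # speaker loading matrix
    sigma = 0.7                              # residual standard deviation (assumed)

    def sample_speaker_sessions(n_sessions):
        """Draw n_sessions observations that share a single speaker factor h."""
        h = rng.normal(size=Q)
        eps = rng.normal(scale=sigma, size=(n_sessions, D))
        return m + h @ V.T + eps             # shape (n_sessions, D)

    X = sample_speaker_sessions(3)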

Cited by 25 publications (20 citation statements). References 29 publications.
“…Experiments were carried out on the core task (short2-short3) of NIST SRE08 [42]. We use two well-known metrics in evaluating the performance, namely, equal error rate (EER) and minimum detection cost (MinDCF).…”
Section: Methods (mentioning)
confidence: 99%
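The excerpt above reports results in terms of EER and MinDCF. As a reference point, this is a hedged sketch of one common way to estimate the equal error rate from target and non-target scores; the function name and the threshold sweep are assumptions, not the evaluation code used in the cited work.

    import numpy as np

    def equal_error_rate(target_scores, nontarget_scores):
        """Estimate the EER by sweeping the decision threshold over all observed
        scores and returning the point where false-acceptance and
        false-rejection rates are (approximately) equal."""
        thresholds = np.sort(np.concatenate([target_scores, nontarget_scores]))
        far = np.array([(nontarget_scores >= t).mean() for t in thresholds])
        frr = np.array([(target_scores < t).mean() for t in thresholds])
        i = np.argmin(np.abs(far - frr))
        return 0.5 * (far[i] + frr[i])

    # Synthetic usage example: two unit-variance Gaussians two standard
    # deviations apart give an EER of roughly 16%.
    rng = np.random.default_rng(1)
    print(equal_error_rate(rng.normal(2.0, 1.0, 2000), rng.normal(0.0, 1.0, 20000)))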
“…However, whitening is not feasible for the i-supervector due to data scarcity. To this end, we advocate the use of a Gaussianized version of rank norm [34,41]. The i-supervector is processed element-wise with warping functions that map each dimension to a standard Gaussian distribution (instead of a uniform distribution, as in rank norm).…”
Section: I-supervector Pre-conditioning (mentioning)
confidence: 99%
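A minimal sketch of the element-wise Gaussianization described above, assuming a simple rank-based warp followed by the inverse Gaussian CDF; this is one plausible reading of "Gaussianized rank norm", not necessarily the authors' exact implementation.

    import numpy as np
    from scipy.stats import norm, rankdata

    def gaussianize(X):
        """Warp each dimension of X (n_samples x n_dims) toward a standard
        Gaussian: replace values by their ranks, rescale the ranks into the
        open interval (0, 1), then apply the inverse Gaussian CDF."""
        n = X.shape[0]
        ranks = np.apply_along_axis(rankdata, 0, X)   # per-dimension ranks in 1..n
        return norm.ppf(ranks / (n + 1.0))

    # Usage example on synthetic, heavily skewed data.
    rng = np.random.default_rng(2)
    X = rng.lognormal(size=(500, 8))
    Z = gaussianize(X)           # each column of Z is approximately N(0, 1)

In practice the warping functions would be estimated on a development set and then applied to enrollment and test i-supervectors; the sketch above simply warps a single matrix in place.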
“…The most detailed derivations are given in [16]. Our version was based on [14] and accelerated in a style similar to [16]. Algorithm 1 summarizes it, and the details are presented in Appendix A.…”
Section: EM-algorithms (mentioning)
confidence: 99%
“…A number of solutions to this problem have been introduced. In [14], the authors exploit the special matrix structure of the PLDA model and manually derive equations for the required matrix inversions. In [15], the authors propose a special change of variables that leads to diagonalized versions of the required matrices.…”
Section: EM-algorithms (mentioning)
confidence: 99%
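The "change of variables that leads to diagonalized versions of the required matrices" reads like a simultaneous-diagonalization trick: transform two symmetric positive-definite matrices so that one becomes the identity and the other becomes diagonal, after which the matrices an EM iteration needs to invert are diagonal. The sketch below illustrates that general idea and is not the exact derivation in [15].

    import numpy as np
    from scipy.linalg import eigh

    def simultaneous_diagonalizer(B, W):
        """Return T and d with T.T @ W @ T = I and T.T @ B @ T = diag(d).
        After this change of variables both matrices are diagonal, so any linear
        combination of them that an EM iteration needs to invert can be
        inverted element-wise."""
        d, T = eigh(B, W)        # generalized symmetric eigenproblem B v = d W v
        return T, d

    # Small self-check with random symmetric positive-definite matrices.
    rng = np.random.default_rng(3)
    A = rng.normal(size=(5, 5)); B = A @ A.T + 5 * np.eye(5)
    C = rng.normal(size=(5, 5)); W = C @ C.T + 5 * np.eye(5)
    T, d = simultaneous_diagonalizer(B, W)
    assert np.allclose(T.T @ W @ T, np.eye(5))
    assert np.allclose(T.T @ B @ T, np.diag(d))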