2016
DOI: 10.1007/s11042-016-4071-1
Frame level sparse representation classification for speaker verification

Cited by 6 publications (1 citation statement)
References 18 publications
“…Recently, the sparse representation of speaker acoustic features, such as i-vector features [13], GMM-UBM features [14], tensor features [15], MFCCs [16], and Gaussian mixture model mean supervectors [17], has been introduced for speaker recognition via synthesis sparse representation models. In these models, a signal $x \in \mathbb{R}^{M \times 1}$ is represented as a linear combination of a few atoms from an overcomplete dictionary $D \in \mathbb{R}^{M \times Q}$ ($Q > M$), i.e., $x = Da$, where $a \in \mathbb{R}^{Q \times 1}$ is the sparse coefficient vector with $\|a\|_0 = L \ll Q$; the $\ell_0$ quasi-norm $\|\cdot\|_0$ counts the number of nonzero components of its argument.…”
Section: Introduction (mentioning)
Confidence: 99%
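The quoted passage describes the synthesis sparse representation model behind sparse-representation-based speaker classification: code a test vector over a class-labelled dictionary under an $\ell_0$ sparsity constraint, then assign the class whose atoms give the smallest reconstruction residual. Below is a minimal illustrative sketch, not taken from the cited paper: the $\ell_0$-constrained coding step is approximated greedily with scikit-learn's OrthogonalMatchingPursuit, and the function and variable names (src_classify, class_labels, the toy dictionary) are assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(x, D, class_labels, n_nonzero=10):
    """Assign x (shape (M,)) to the class in class_labels (shape (Q,))
    whose atoms in D (shape (M, Q)) best reconstruct it."""
    # Approximate the l0-constrained coding step x ~= D @ a, ||a||_0 <= n_nonzero,
    # with orthogonal matching pursuit (a greedy stand-in for exact l0 minimization).
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(D, x)
    a = omp.coef_  # sparse coefficient vector, shape (Q,)

    # Class-wise residuals: keep only the coefficients of one class at a time
    # and measure how well those atoms alone reconstruct x.
    best_class, best_residual = None, np.inf
    for c in np.unique(class_labels):
        a_c = np.where(class_labels == c, a, 0.0)
        residual = np.linalg.norm(x - D @ a_c)
        if residual < best_residual:
            best_class, best_residual = c, residual
    return best_class

# Toy usage with a random unit-norm dictionary split between two "speakers".
rng = np.random.default_rng(0)
M, Q = 20, 60
D = rng.standard_normal((M, Q))
D /= np.linalg.norm(D, axis=0)
labels = np.array([0] * 30 + [1] * 30)
x = D[:, :5] @ rng.standard_normal(5)  # sparse in speaker-0 atoms
print(src_classify(x, D, labels))      # expected output: 0
```

In frame-level variants of this idea, the same residual comparison would be applied per acoustic frame and the per-frame decisions pooled into an utterance-level score; the sketch above only shows the single-vector case.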