Feature classification criterion for missing features mask estimation in robust speaker recognition (2012)
DOI: 10.1007/s11760-012-0299-z

Cited by 2 publications (5 citation statements) | References 20 publications
“…Since the model-based noise-compensation methods have a high computational burden, and they assume the prior knowledge of noise characteristics, they are not suitable for real-time usage [28], [34].…”
Section: Robust Feature Extraction
confidence: 99%
“…The score normalization techniques are utilized to increase the performance of the recognition systems against the noise, and they use imposter data-sets obtained in the same conditions of the target speaker [8], [34], [82-83]. The number of estimated parameters is kept as equal to the number of static feature coefficients in FW, STG, and FM, and they are required to operate in an offline condition [18].…”
Section: Robust Feature Extraction
confidence: 99%
“…Similar to the VAD, vowel-like regions are used in [29] and improved in [30] by including the non-vowel-like regions. The missing data approach is also investigated in several studies [31][32][33][34], where a binary time-frequency mask is constructed for the noisy spectrum to indicate reliable and unreliable features. The unreliable features are then reconstructed, or marginalized (ignored in score computation).…”
Section: Introduction
confidence: 99%
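The missing-data idea described in the last citation statement can be sketched as a two-step procedure: estimate a binary mask over time-frequency cells from a local SNR estimate, then score using only the cells marked reliable (marginalization). The function names, the SNR threshold, and the noise-estimation shortcut below are illustrative assumptions, not the cited paper's actual mask-estimation criterion.

```python
import numpy as np

def estimate_binary_mask(noisy_power, noise_power, snr_threshold_db=0.0):
    """Label each time-frequency cell reliable (1) when its local SNR
    estimate exceeds the threshold, else unreliable (0).

    Assumes a noise power estimate is already available (e.g. from
    speech-absent frames); this is a simplification for illustration.
    """
    # Spectral-subtraction style estimate of the clean power, floored
    # to avoid log of non-positive values.
    clean_est = np.maximum(noisy_power - noise_power, 1e-10)
    local_snr_db = 10.0 * np.log10(clean_est / np.maximum(noise_power, 1e-10))
    return (local_snr_db > snr_threshold_db).astype(int)

def marginal_log_score(log_likelihoods, mask):
    """Marginalization: sum per-cell log-likelihoods over reliable cells
    only, so unreliable cells are ignored in the score computation."""
    return float(np.sum(log_likelihoods * mask))

# Toy example: two cells in one frame, noise power 1.0 in both.
# Cell 0 has strong signal (SNR ~9.5 dB), cell 1 is noise-dominated.
mask = estimate_binary_mask(np.array([[10.0, 1.1]]), np.array([[1.0, 1.0]]))
print(mask.tolist())                                        # [[1, 0]]
print(marginal_log_score(np.array([[-1.0, -5.0]]), mask))   # -1.0
```

The reconstruction alternative mentioned in the quote would instead fill the unreliable cells (e.g. by bounded imputation from a clean-speech model) before scoring with the full feature vector.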