2009
DOI: 10.1007/s12046-009-0042-9
An experimental comparison of modelling techniques for speaker recognition under limited data condition

Cited by 13 publications (11 citation statements)
References 15 publications
“…It can be implied from that study that all popular speaker recognition techniques, normally trained using substantial speech data, will suffer similar performance degradation when trained and tested using limited speech data. Thus, the proposed system presented in this paper shows better limited data condition performance than all the traditional methods described by Jayanna and Prasanna (2009).…”
Section: Results (mentioning)
confidence: 80%
“…A limited amount of speech data (20-30 msec for training and 8-15 msec for testing) is used for evaluation. A K-means clustering algorithm based on an iterative refinement approach is used to form the speaker model, which is effective specifically under the limited data condition [20]. The system performance is measured in terms of Percentage Identification Accuracy (PIA), given as the ratio of the number of correctly classified test sequences to the total number of test sequences, expressed as a percentage.…”
Section: Results (mentioning)
confidence: 99%
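The citation statement above describes the standard vector-quantization pipeline: build a K-means codebook per speaker by iterative refinement, identify a test utterance by minimum average distortion against each codebook, and score the system with PIA. A minimal sketch of that pipeline follows; the function names (`kmeans`, `avg_distortion`, `identify`, `pia`), Euclidean distortion measure, and synthetic features are illustrative assumptions, not the cited paper's implementation:

```python
import numpy as np

def kmeans(vectors, k, iters=20, seed=0):
    """Iterative-refinement (Lloyd's) K-means: returns a k-codeword codebook."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), size=k, replace=False)]
    for _ in range(iters):
        # Assign each feature vector to its nearest codeword.
        dists = np.linalg.norm(vectors[:, None] - codebook[None], axis=2)
        labels = dists.argmin(axis=1)
        # Refine: move each codeword to the mean of its assigned vectors.
        for j in range(k):
            members = vectors[labels == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook

def avg_distortion(test_vectors, codebook):
    """Mean distance from each test vector to its closest codeword."""
    dists = np.linalg.norm(test_vectors[:, None] - codebook[None], axis=2)
    return dists.min(axis=1).mean()

def identify(test_vectors, codebooks):
    """Pick the speaker whose codebook yields the lowest average distortion."""
    return min(codebooks, key=lambda spk: avg_distortion(test_vectors, codebooks[spk]))

def pia(predictions, truths):
    """Percentage Identification Accuracy: correct / total, in percent."""
    correct = sum(p == t for p, t in zip(predictions, truths))
    return 100.0 * correct / len(truths)
```

With two well-separated synthetic speakers (e.g. 12-dimensional Gaussian "features" centred at 0 and 5), training one small codebook each and classifying held-out vectors reproduces the PIA computation described in the quote.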
“…As we know, the number of feature vectors should not be less than ten times the number of non-overlapping clusters (Jayanna and Mahadeva, 2009). …”
Section: Discussion (mentioning)
confidence: 99%