2005
DOI: 10.1109/tsmcc.2005.848166
On the Use of Different Speech Representations for Speaker Modeling

Abstract: Numerous speech representations have been reported to be useful in speaker recognition. However, there is much less agreement on which speech representation provides a perfect representation of speaker-specific information conveyed in a speech signal. Unlike previous work, we propose an alternative approach to speaker modeling by the simultaneous use of different speech representations in an optimal way. Inspired by our previous empirical studies, we present a soft competition scheme on different spee…



Cited by 20 publications (11 citation statements)
References: 44 publications
“…3, it is also observed that the semi-supervised learning system of a strong classifier merely yields an improvement only when the ICA representation is used but fails to work on other two representations. By comparison, we conclude that the simultaneous use of different representations results in robust learning and better generalization during semi-supervised learning, which is completely consistent with our previous argument in unsupervised [13] and supervised learning [14], [15].…”
Section: Facial Expression Recognition (supporting)
confidence: 90%
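The quoted statement argues that combining classifiers trained on different representations of the same data yields more robust decisions than any single representation. A minimal sketch of one common fusion rule, majority voting, is below; the function name and the speaker labels are illustrative assumptions, not from the paper.

```python
# Hypothetical sketch: fuse classifiers trained on different
# representations of the same samples via per-sample majority vote.
from collections import Counter

def majority_vote(predictions):
    """predictions: one list of labels per representation-specific
    classifier; returns the majority label for each sample."""
    fused = []
    for sample_preds in zip(*predictions):
        fused.append(Counter(sample_preds).most_common(1)[0][0])
    return fused

# Three classifiers (e.g. trained on different feature sets) disagree
# on some samples; the vote resolves the disagreement.
preds_a = ["spk1", "spk2", "spk1"]
preds_b = ["spk1", "spk2", "spk2"]
preds_c = ["spk2", "spk2", "spk1"]
print(majority_vote([preds_a, preds_b, preds_c]))  # ['spk1', 'spk2', 'spk1']
```

The papers cited above use a soft competition scheme rather than a hard vote; this sketch only illustrates the general multiple-representation fusion idea.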
“…One is that the nature of the task makes different redundant representations 1 available, and the other is no different representations available. For the former case, our earlier studies show the usefulness of combining multiple classifiers trained on different representations [13]- [15]. For the latter case, we can use the bootstrap re-sampling techniques [16] to create different data sets and then train an ensemble of classifiers on them.…”
Section: B. Classifier Ensemble Generation (mentioning)
confidence: 99%
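For the case where no alternative representations exist, the quoted passage falls back on bootstrap re-sampling [16]: draw replicates of the training set with replacement and train one ensemble member per replicate. A minimal sketch of the re-sampling step, assuming a plain list-based dataset (the function name is hypothetical):

```python
# Hypothetical sketch of bootstrap re-sampling for ensemble generation:
# each replicate has the original size but repeats some points and
# omits others, so each ensemble member sees a different training set.
import random

def bootstrap_replicates(data, n_replicates, seed=0):
    """Draw len(data) points with replacement, n_replicates times."""
    rng = random.Random(seed)
    return [[rng.choice(data) for _ in data] for _ in range(n_replicates)]

data = list(range(10))
replicates = bootstrap_replicates(data, n_replicates=3)
# A separate classifier would then be trained on each replicate.
```

Training the actual classifiers and combining them is omitted; only the data-set creation step described in the quotation is shown.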
“…A second method is to process different features by using them together with different classifiers [44,45]. In this widely used second approach, the key is to match each feature to the most suitable classifier.…” (translated from Turkish)
Section: Introduction (unclassified)
“…Unlike the previous method [20], our approach is motivated by our previous success in the use of different representations to construct an ensemble model for dealing with difficult supervised [22]- [24] and semisupervised learning tasks [25], where the use of different representations better exploits the information conveyed in the raw data and therefore leads to the better performance. For each individual representation, we first employ an RPCL network for clustering analysis of automatic model selection, and the nature of an RPCL network often leads to quick clustering analysis.…”
Section: Introduction (mentioning)
confidence: 99%