1997 IEEE International Conference on Acoustics, Speech, and Signal Processing
DOI: 10.1109/icassp.1997.596111

Phone classification with segmental features and a binary-pair partitioned neural network classifier

Abstract: This paper presents methods and experimental results for phonetic classification using 39 phone classes and the NIST recommended training and test sets for NTIMIT and TIMIT. Spectral/temporal features which represent the smoothed trajectory of FFT-derived speech spectra over 300 ms intervals are used for the analysis. Classification tests are made with both a binary-pair partitioned (BPP) neural network system (one neural network for each of the 741 pairs of phones) and a single large neural network. Classifica…
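The abstract's figure of 741 pairwise networks follows from the 39 phone classes: one binary classifier per unordered pair, i.e. 39 × 38 / 2 = 741. A minimal sketch of this one-vs-one decoding scheme (an illustration of the general technique, not the paper's actual implementation; the `classify` helper and its toy decision rule are hypothetical):

```python
from itertools import combinations

# With 39 phone classes, a binary-pair partitioned (BPP) system trains
# one binary classifier per unordered pair of classes.
PHONE_CLASSES = 39
pairs = list(combinations(range(PHONE_CLASSES), 2))
print(len(pairs))  # 741, matching the abstract

def classify(pairwise_decision, n_classes):
    """One-vs-one decoding: each pairwise classifier votes for one of
    its two classes; the class with the most votes wins."""
    votes = [0] * n_classes
    for a, b in combinations(range(n_classes), 2):
        votes[pairwise_decision(a, b)] += 1
    return max(range(n_classes), key=votes.__getitem__)

# Toy decision rule standing in for a trained pairwise network: it
# always prefers the smaller class index, so class 0 wins every vote.
print(classify(lambda a, b: a, PHONE_CLASSES))  # 0
```

In a real BPP system each `pairwise_decision` would be a small neural network trained only on examples of its two phones, which keeps each binary problem simple at the cost of training many models.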

Cited by 34 publications (16 citation statements)
References 9 publications
“…These results show that SVMs perform significantly better than the Gaussian classifiers. Furthermore, the results are competitive with current state-of-the-art performance in phonetic classification using this data set [5,16]. It is also interesting to note that the choice of kernel function does not have a major impact on accuracy.…”
Section: TIMIT Experiments
confidence: 57%
“…Frequently used features are the spectrum [8], optimal filters [9], Mel Frequency Cepstral Coefficients (MFCC) [10]. The results show that MFCCs give high performance.…”
Section: Introduction
confidence: 84%
“…Several discriminative training strategies have also been suggested, including large margin training [2,3] and maximum mutual information training [4]. Apart from GMMs, other strategies used for classification include support vector machines [5], nearest neighbor strategies [6], hidden conditional random fields [7], linear regularized least squares [8] and neural networks [9]. In [3], GMMs are used in a hierarchical structure to yield state-of-the-art results for this task.…”
Section: Introduction
confidence: 99%