2018
DOI: 10.1007/s12652-018-0828-x

GFCC based discriminatively trained noise robust continuous ASR system for Hindi language

Cited by 37 publications (11 citation statements)
References 36 publications
Citing publications: 2018–2024

Citation statements, ordered by relevance:

“…The performance is reported for various experimental approaches such as acoustic modelling context, modifying number of mixtures in the GMM, various optimized features with HMM, GMM-HMM, variations in the language models, using different discriminative learning techniques i.e., GFCC optimized by DE with MMI, MPE, with clean and noise corrupted speech for various types of noises of changing SNR levels. The DE optimized GFCC coupled with MPE discriminative learning techniques with triphone language models outperformed all other model combinations in clean and noise affected conditions [16].…”
Section: Literature Survey
confidence: 94%
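The statement above describes GFCC features whose filterbank parameters are optimized by differential evolution (DE) and models trained with MMI/MPE discriminative criteria. For orientation only, here is a minimal sketch of plain (un-optimized) GFCC extraction in Python, assuming SciPy's gammatone filter design, ERB-spaced center frequencies, and illustrative frame settings; the DE optimization and the discriminative training stages of the cited paper are not reproduced.

```python
# Minimal GFCC sketch (an assumption, not the cited paper's exact pipeline):
# gammatone filterbank energies per frame -> log -> DCT -> cepstral coefficients.
import numpy as np
from scipy.signal import gammatone, lfilter
from scipy.fft import dct

def gfcc(signal, fs=16000, n_filters=32, n_ceps=13,
         frame_len=0.025, frame_shift=0.010, f_min=100.0, f_max=None):
    signal = np.asarray(signal, dtype=float)
    f_max = f_max or 0.45 * fs  # keep the top band safely below Nyquist

    # ERB-rate spacing of center frequencies (a common choice; an assumption here).
    def hz_to_erb(f):
        return 21.4 * np.log10(4.37e-3 * f + 1.0)

    def erb_to_hz(e):
        return (10 ** (e / 21.4) - 1.0) / 4.37e-3

    centers = erb_to_hz(np.linspace(hz_to_erb(f_min), hz_to_erb(f_max), n_filters))

    # Filter the full signal through each gammatone band (SciPy's 4th-order IIR design).
    band_out = np.empty((n_filters, len(signal)))
    for i, fc in enumerate(centers):
        b, a = gammatone(fc, 'iir', fs=fs)
        band_out[i] = lfilter(b, a, signal)

    # Frame the band outputs and take log energy per band per frame.
    win, hop = int(frame_len * fs), int(frame_shift * fs)
    n_frames = 1 + max(0, (len(signal) - win) // hop)
    feats = np.empty((n_frames, n_filters))
    for t in range(n_frames):
        seg = band_out[:, t * hop: t * hop + win]
        feats[t] = np.log(np.sum(seg ** 2, axis=1) + 1e-10)

    # DCT decorrelates the log band energies into cepstral coefficients.
    return dct(feats, type=2, axis=1, norm='ortho')[:, :n_ceps]
```

In the cited setup such coefficients would then feed GMM-HMM acoustic models; that stage is outside the scope of this sketch.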
“…Table 6 lists all the extracted features along with their statistical measures of mean and standard deviation (STD) for method I (DWT) and method II (EMD). We extracted time domain [38, 39, 40, 41, 42, 43, 44, 45], spectral [46, 47], fractal and chaos [48, 49], chroma [50, 51], cepstral [52], and texture features [53] and analyzed them statistically.…”
Section: Methods
confidence: 99%
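The DWT-based pipeline (method I) referenced in this statement decomposes the signal into sub-bands before computing per-band statistics. A minimal sketch, assuming PyWavelets and an illustrative db4 wavelet at four levels (neither specified by the cited work), showing mean/STD per sub-band:

```python
# Minimal DWT feature sketch (PyWavelets assumed; wavelet and level are illustrative).
import numpy as np
import pywt

def dwt_band_features(signal, wavelet='db4', level=4):
    # wavedec returns [cA_n, cD_n, ..., cD_1]: approximation plus detail sub-bands.
    coeffs = pywt.wavedec(np.asarray(signal, dtype=float), wavelet, level=level)
    # Mean and standard deviation per sub-band, as in the cited feature tables.
    return {f'band_{i}': (float(np.mean(c)), float(np.std(c)))
            for i, c in enumerate(coeffs)}
```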
“…In this study, we propose a method for monitoring the transformer status by classifying over-, normal-, and under-voltage levels based on the acoustic signal of the transformer operating in various noisy environments. In particular, we focus on the audible frequency band, which can be recognized by humans from the acoustic signal; that is, the acoustic signal is measured within the audible frequency, and the measured acoustic signal is converted into a Mel Spectrogram (MS) [21, 22, 23]. To design the classification model, we exploit the idea of the U-Net [24] encoder layers to extract and express the important features from the acoustic signal.…”
Section: Introduction
confidence: 99%
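The Mel Spectrogram (MS) conversion mentioned here is a log-compressed Mel-filterbank time-frequency representation of the measured acoustic signal. A minimal sketch, assuming librosa and illustrative parameter values (neither taken from the cited study):

```python
# Minimal Mel spectrogram sketch (librosa is an assumed choice, not from the cited work).
import numpy as np
import librosa

def mel_spectrogram_db(path, sr=16000, n_fft=1024, hop_length=256, n_mels=64):
    y, sr = librosa.load(path, sr=sr)            # load and resample the acoustic signal
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels
    )                                            # power spectrogram mapped onto Mel bands
    return librosa.power_to_db(mel, ref=np.max)  # log compression for a CNN-style encoder
```

The resulting 2-D array is the kind of input a U-Net-style encoder, as described in the statement, would consume.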
“…To analyze the acoustic signal, the Linear Prediction Cepstrum Coefficient (LPCC) [21], Gammatone Filter Cepstral Coefficient (GFCC) [22], and Mel Frequency Cepstrum Coefficient (MFCC) [23] are widely used in the time–frequency domain. LPCC can represent the acoustic signal as the frequency for a certain time without signal distortion.…”
Section: Introduction
confidence: 99%
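For comparison with the GFCC sketch given earlier, MFCC extraction follows the same filterbank, log, DCT pattern but uses a Mel-scaled filterbank. A minimal sketch, again assuming librosa as the library:

```python
# Minimal MFCC sketch (librosa assumed; mirrors the GFCC pipeline with a Mel filterbank).
import librosa

def mfcc_features(path, sr=16000, n_mfcc=13):
    y, sr = librosa.load(path, sr=sr)
    # Mel filterbank energies -> log -> DCT, returned as an (n_mfcc, n_frames) array.
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
```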