2019
DOI: 10.1007/s12652-019-01325-y
A comparative analysis of pooling strategies for convolutional neural network based Hindi ASR

Cited by 22 publications (7 citation statements)
References 53 publications
“…. , R } is termed the pooling shift, which enables overlap between pooling regions when N < R. The main limitation of max pooling is that only the largest element in the pooling region is retained, while all other elements are ignored entirely [23]. If most of the components in the pooling region are of large magnitude, these distinguishing traits vanish once max pooling has been applied.…”
Section: Max Pooling Methods
confidence: 99%
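The quoted limitation can be illustrated with a minimal sketch (the function name and toy input are our own, not from the cited paper): with region size R and pooling shift N, setting N < R makes the pooling regions overlap, and each region keeps only its maximum, discarding every other large-magnitude element.

```python
import numpy as np

def max_pool_1d(x, R, N):
    """Max pooling over regions of size R with pooling shift N.
    When N < R, consecutive pooling regions overlap."""
    starts = range(0, len(x) - R + 1, N)
    return np.array([x[s:s + R].max() for s in starts])

x = np.array([0.9, 0.8, 0.95, 0.85, 0.7, 0.6])

# Non-overlapping pooling (N == R): each element competes exactly once.
print(max_pool_1d(x, R=3, N=3))   # [0.95 0.85]

# Overlapping pooling (N < R): regions share elements.
print(max_pool_1d(x, R=3, N=1))   # [0.95 0.95 0.95 0.85]

# Note that 0.9 and 0.8 are large but never survive: only each
# region's maximum is kept, which is the limitation described above.
```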
“…Authors found that MFCC is effective for end-to-end acoustic modelling using CNNs [52], [53]. MFCC is a widely used feature in speech modelling [54], [55]. An MFCC-based comparative study of speech recognition techniques found that MFCC with HMM gave a recognition accuracy of 85 percent, while deep neural networks scored 82.2 percent [56].…”
Section: Analysis of Mel-Scale Speech Features Using Electroglottographic and Speech Signals Parameterized by the KNN and XGBo Method
confidence: 99%
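Since several citing works center on MFCC features, a self-contained sketch of the standard MFCC pipeline (power spectrum → mel filterbank → log → DCT) may help. All function names, the sample rate, and the filter counts below are illustrative choices, not taken from the paper.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels):
    # Triangular filters spaced evenly on the mel scale.
    hz = mel_to_hz(np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2))
    bins = np.floor((n_fft + 1) * hz / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        lo, ctr, hi = bins[m - 1], bins[m], bins[m + 1]
        for k in range(lo, ctr):
            fb[m - 1, k] = (k - lo) / max(ctr - lo, 1)
        for k in range(ctr, hi):
            fb[m - 1, k] = (hi - k) / max(hi - ctr, 1)
    return fb

def dct2(x):
    # Orthonormal DCT-II, used to decorrelate the log mel energies.
    n = len(x)
    k = np.arange(n)[:, None]
    basis = np.cos(np.pi * k * (2 * np.arange(n) + 1) / (2 * n))
    y = basis @ x
    y[0] *= np.sqrt(1.0 / n)
    y[1:] *= np.sqrt(2.0 / n)
    return y

def mfcc(frame, sr=16000, n_mels=26, n_mfcc=13):
    power = np.abs(np.fft.rfft(frame)) ** 2            # power spectrum
    mel_energy = mel_filterbank(sr, len(frame), n_mels) @ power
    return dct2(np.log(mel_energy + 1e-10))[:n_mfcc]   # keep first coefficients

frame = np.sin(2 * np.pi * 440.0 * np.arange(400) / 16000.0)  # 25 ms of a 440 Hz tone
coeffs = mfcc(frame)
print(coeffs.shape)  # (13,)
```

In practice, library implementations such as `librosa.feature.mfcc` add windowing, pre-emphasis, and framing on top of this core computation.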
“…With recent advances in deep learning achieving impressively low misclassification rates for image- and audio-based processing [29], we aim to further push the lower bound of misprediction rates by applying deep learning algorithms to branch prediction. Deep learning [23] is a set of algorithms for training and using multi-layer neural networks.…”
Section: B DNN and DNN-Based Predictor Models
confidence: 99%
“…Locally connected networks over the longest history bits [22], such as convolutional neural networks (CNNs) [32], [33], are likely to be a better choice for branch prediction, as most branches show high correlation with nearby history. CNNs [29], [34]–[38] are deep feedforward neural networks. The vital components of a CNN architecture are convolution layers and pooling layers, as shown in Fig. 4.…”
Section: CNN and CNN-Based Predictor Model
confidence: 99%
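Those two components can be sketched on toy branch-history data (this example is our own illustration, not the predictor from the cited work): a 1-D convolution slides a filter over the history bits to produce a feature map, and a pooling layer then downsamples that map.

```python
import numpy as np

def conv1d(x, w):
    # 'Valid' sliding dot product (cross-correlation, as CNN layers compute it).
    n = len(x) - len(w) + 1
    return np.array([np.dot(x[i:i + len(w)], w) for i in range(n)])

def max_pool(x, R):
    # Non-overlapping max pooling with region size R.
    return np.array([x[i:i + R].max() for i in range(0, len(x) - R + 1, R)])

history = np.array([1.0, -1.0, 1.0, 1.0, -1.0, 1.0, 1.0, 1.0])  # taken = +1, not taken = -1
w = np.array([0.5, 0.5])            # toy filter: responds to two taken branches in a row
feature_map = conv1d(history, w)
pooled = max_pool(feature_map, 2)
print(feature_map)  # [0. 0. 1. 0. 0. 1. 1.]
print(pooled)       # [0. 1. 1.]
```

The locality argument from the quote shows up here directly: the filter only ever sees adjacent history bits, so it captures correlation with nearby history.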