2012
DOI: 10.14569/ijacsa.2012.030407

Multi-input Multi-output Beta Wavelet Network: Modeling of Acoustic Units for Speech Recognition

Abstract: In this paper, we propose a novel wavelet network architecture called the Multi-input Multi-output Wavelet Network (MIMOWN), a generalization of the older wavelet network architecture. This new prototype was applied to speech recognition, specifically to model acoustic units of speech. The originality of our work lies in proposing MIMOWN to model acoustic units of speech. This approach was designed to overcome the limitations of the older wavelet network model. The use of the multi-input multi-output a…

Cited by 18 publications (5 citation statements)
References 22 publications
“…Since the Beta wavelet [1] is a powerful tool in various domains such as image compression [2], face recognition [3,4], 3D face recognition [5], image classification [6,7], phoneme recognition [8], speech recognition [9], and in particular Arabic word recognition [10] and hand tracking and recognition [11], this study used Fast Beta Wavelet Network (FBWN) modeling to propose a new approach for CBIR.…”
Section: Introduction
confidence: 99%
“…Another improvement we are making to the model is to include a batch normalization [26], [27] layer to make it more reliable and efficient [6], [28]. Ioffe and Szegedy introduced the batch normalization concept in 2015.
Section: Batch Normalization
confidence: 99%
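For orientation, the concept the statement refers to can be summarized in a minimal sketch (a generic NumPy illustration, assumed here; it is not code from the cited paper): each feature is normalized by its batch statistics and then rescaled by learned parameters.

import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # x: (batch, features) activations; gamma/beta: learned per-feature scale and shift
    mean = x.mean(axis=0)                    # per-feature mean over the batch
    var = x.var(axis=0)                      # per-feature variance over the batch
    x_hat = (x - mean) / np.sqrt(var + eps)  # zero-mean, unit-variance activations
    return gamma * x_hat + beta              # scale and shift to restore capacity

# Usage: normalize a batch of 4 samples with 3 features
x = np.random.randn(4, 3)
y = batch_norm(x, gamma=np.ones(3), beta=np.zeros(3))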
“…The divergence is described using a cost function. The cost function of an SAE consists of the following three terms (Hassairi et al., 2015; Ejbali et al., 2012):…”
Section: The Basic Sparse Auto-encoder
confidence: 99%
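The three terms are truncated in the excerpt above; as a point of reference, the commonly used sparse auto-encoder cost combines a reconstruction error, a weight-decay term, and a KL-divergence sparsity penalty (a standard formulation given here for orientation, not necessarily the exact expression used in the cited works):

J(W,b) = \frac{1}{m} \sum_{i=1}^{m} \frac{1}{2} \left\lVert \hat{x}^{(i)} - x^{(i)} \right\rVert^{2}
       + \frac{\lambda}{2} \sum_{l} \left\lVert W^{(l)} \right\rVert_{F}^{2}
       + \beta \sum_{j=1}^{s} \mathrm{KL}\!\left( \rho \,\middle\Vert\, \hat{\rho}_{j} \right)

Here the first term is the average reconstruction error over m training samples, the second is weight decay over the layer weights W^{(l)}, and the third penalizes the deviation of each hidden unit's average activation \hat{\rho}_{j} from a target sparsity level \rho.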