2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)
DOI: 10.1109/asru.2017.8268978

Deep quaternion neural networks for spoken language understanding

Authors: Titouan Parcollet, Mohamed Morchid, Georges Linarès

Abstract: Deep Neural Networks (DNNs) have received great interest from researchers due to their capability to construct robust abstract representations of heterogeneous documents in a latent subspace. Nonetheless, standard real-valued deep neural networks require an appropriate adaptation, such as the convolution process, to capture latent relations between input features. Moreover…

Cited by 21 publications (16 citation statements)
References 19 publications
“…More recently, neural networks of complex and hypercomplex numbers have received increasing attention (Hirose and Yoshida, 2012; Tygert et al., 2016; Danihelka et al., 2016; Wisdom et al., 2016), and some efforts have shown promising results in different applications. In particular, a deep quaternion network (Parcollet et al., 2016; Titouan et al., 2017; Parcollet et al., 2017), a deep quaternion convolutional network (Chase Gaudet, 2017; Titouan et al., 2018) and a quaternion recurrent neural network (Parcollet et al., 2018c) have been developed and employed by our team from the LIA for different challenging tasks such as image classification, compression and reconstruction, or speech recognition.…”
Section: Quaternion-valued Neural Network
confidence: 99%
“…Consequently α = 50/T, with T the number of topics, and β = 0.01. The number T has been previously investigated for this task in (Parcollet et al., 2016, 2017), and is set to 25. More precisely, 10 runs of the T = 25 LDA model are concatenated to obtain a final vector of size 25 × 10 = 250 to alleviate any variations.…”
Section: Abstract Document Representation with LDA
confidence: 99%
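The quoted configuration is concrete enough to sketch. Below is a minimal Python illustration of this abstract document representation, assuming the gensim toolkit and a pre-tokenized corpus (neither is specified in the quote, and the function name is illustrative): ten LDA runs with T = 25 topics, α = 50/T and β = 0.01 are concatenated into a 250-dimensional vector.

```python
# Minimal sketch of the LDA document representation described above.
# gensim is an assumption; the cited papers do not name a toolkit.
import numpy as np
from gensim.corpora import Dictionary
from gensim.models import LdaModel

T = 25            # number of topics, as reported in the quote
ALPHA = 50.0 / T  # Dirichlet prior on per-document topic distributions
BETA = 0.01       # Dirichlet prior on per-topic word distributions (gensim's eta)
N_RUNS = 10       # independent runs concatenated to alleviate variations

def lda_document_vector(tokenized_docs, target_doc):
    """Map one tokenized document to a T * N_RUNS = 250-dimensional vector."""
    dictionary = Dictionary(tokenized_docs)
    corpus = [dictionary.doc2bow(doc) for doc in tokenized_docs]
    bow = dictionary.doc2bow(target_doc)
    parts = []
    for seed in range(N_RUNS):
        lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=T,
                       alpha=[ALPHA] * T, eta=BETA, random_state=seed)
        # Dense T-dimensional topic posterior for the target document.
        topics = lda.get_document_topics(bow, minimum_probability=0.0)
        parts.append(np.array([prob for _, prob in topics]))
    return np.concatenate(parts)  # shape: (T * N_RUNS,) = (250,)
```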
“…with σ and α the sigmoid and tanh quaternion split activations [18, 11, 19, 10]. The quaternion weight and bias matrices are initialized following the proposal of [15].…”
Section: Quaternion Long-Short Term Memory Neural Network
confidence: 99%
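For readers unfamiliar with the term, a split activation applies a real-valued nonlinearity independently to each of the four components of a quaternion. A minimal NumPy sketch with illustrative names (the cited papers define the operation mathematically, not through this code):

```python
import numpy as np

def split_sigmoid(q):
    """Quaternion split sigmoid: q has shape (..., 4), holding the
    (r, i, j, k) components; the sigmoid acts on each component."""
    return 1.0 / (1.0 + np.exp(-q))

def split_tanh(q):
    """Quaternion split tanh: tanh applied componentwise."""
    return np.tanh(q)
```

Because the nonlinearity acts componentwise, the quaternion algebra is confined to the Hamilton products inside the gates; the split activation itself is an ordinary elementwise operation.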
“…Another fundamental property of quaternion-valued learning is the Hamilton product, which has recently favored the proliferation of convolutional neural networks in the quaternion domain [35,36,37,38]. Due to their capabilities, quaternion-valued learning methods have been applied in several applications, including spoken language understanding [39], color image processing [40,41], 3D audio [42,43], speech recognition [44], image generation [45], quantum mechanics [46], risk diversification [47], and gait data analysis [48].…”
Section: Introduction
confidence: 99%
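Since the Hamilton product is named here as the property driving these applications, a short worked example may help. A minimal NumPy sketch (the array layout and function name are assumptions, not taken from the cited works):

```python
import numpy as np

def hamilton_product(q1, q2):
    """Hamilton product of quaternions stored as (..., 4) arrays of
    (r, x, y, z) components, for q = r + x i + y j + z k."""
    r1, x1, y1, z1 = np.moveaxis(q1, -1, 0)
    r2, x2, y2, z2 = np.moveaxis(q2, -1, 0)
    return np.stack([
        r1*r2 - x1*x2 - y1*y2 - z1*z2,  # real part
        r1*x2 + x1*r2 + y1*z2 - z1*y2,  # i component
        r1*y2 - x1*z2 + y1*r2 + z1*x2,  # j component
        r1*z2 + x1*y2 - y1*x2 + z1*r2,  # k component
    ], axis=-1)

# Sanity check of the algebra: i * j = k.
i = np.array([0.0, 1.0, 0.0, 0.0])
j = np.array([0.0, 0.0, 1.0, 0.0])
print(hamilton_product(i, j))  # -> [0. 0. 0. 1.]
```

Unlike an elementwise product, every output component mixes all four input components, which is what lets quaternion layers capture internal relations between features while sharing parameters across the four components.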