2016 IEEE Spoken Language Technology Workshop (SLT) 2016
DOI: 10.1109/slt.2016.7846290
Quaternion Neural Networks for Spoken Language Understanding

Cited by 32 publications (51 citation statements)
References 8 publications
“…where α is a quaternion split activation function [8,6,9,17]. The output layer of a quaternion neural network is commonly either quaternion-valued, as for quaternion approximation [7], or real-valued, to obtain a posterior distribution based on a softmax function following the split approach.…”
Section: Quaternion Convolutional Encoder-Decoder
confidence: 99%
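The split activation described in this excerpt applies a real-valued nonlinearity independently to each of the four quaternion components. A minimal sketch, assuming quaternions are stored as NumPy arrays of shape (..., 4) in (r, i, j, k) order; the helper name is hypothetical and not taken from the cited paper's code:

```python
import numpy as np

def split_activation(q, act=np.tanh):
    """Quaternion split activation: apply a real activation
    component-wise to the (r, i, j, k) parts of each quaternion.
    `q` has shape (..., 4); the same `act` is used on every component."""
    return act(q)

q = np.array([0.5, -1.0, 2.0, 0.0])
out = split_activation(q)  # tanh applied to each of the four components
```

Because the nonlinearity acts component-wise, any standard real activation (tanh, sigmoid, ReLU) can be reused unchanged, which is why the split approach is the most common choice in quaternion networks.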
“…with σ and α the sigmoid and tanh quaternion split activations [18,11,19,10]. The quaternion weight and bias matrices are initialized following the proposal of [15].…”
Section: Quaternion Long-Short Term Memory Neural Network
confidence: 99%
“…Moreover, it performs an interpolation between two rotations following a geodesic over a sphere in the ℝ³ space.¹ Given a segmentation S = {s₁, s₂, s₃, s₄} of a document p ∈ P depending on the document segmentation detailed in [12] and a set of topics from a latent Dirichlet allocation (LDA) [23] z = {z₁, . .…”
¹ A multilayer perceptron (MLP) with more than one hidden layer.
Section: Quaternion Algebra
confidence: 99%
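The geodesic interpolation between two rotations mentioned in this excerpt is conventionally computed as spherical linear interpolation (slerp) between unit quaternions. A minimal sketch under that assumption; the function name and array layout are illustrative, not from the cited paper:

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1,
    given as arrays (r, i, j, k). Follows the shorter geodesic arc
    on the unit sphere; t in [0, 1] moves from q0 to q1."""
    dot = np.dot(q0, q1)
    # q and -q represent the same rotation; flip to take the short arc.
    if dot < 0.0:
        q1, dot = -q1, -dot
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    if theta < 1e-8:  # nearly identical rotations: avoid division by ~0
        return q0
    return (np.sin((1.0 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)
```

At t = 0 and t = 1 the endpoints are recovered exactly, and intermediate t values trace the great-circle arc between the two rotations.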
“…In other words, real-valued representations reveal little in the way of document internal structure, since they consider only the words or topics contained in the document as isolated basic elements. Therefore, quaternion multi-layer perceptrons (QMLP) [11,12,13] and quaternion autoencoders (QAE) [14] have been introduced to capture such latent dependencies, thanks to the four-dimensional nature of hyper-complex numbers alongside the Hamilton product [15]. Nonetheless, previous quaternion-based studies focused on three-layered neural networks, while the efficiency and the effectiveness of DNNs have already been demonstrated [16,5].…”
Section: Introduction
confidence: 99%
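The Hamilton product cited in this excerpt is the standard (non-commutative) quaternion multiplication; it is what lets a quaternion layer mix the four components of its inputs rather than treating them as isolated real values. A minimal sketch with quaternions stored as (r, i, j, k) arrays; the function name is illustrative:

```python
import numpy as np

def hamilton_product(p, q):
    """Hamilton product p ⊗ q of two quaternions (r, i, j, k).
    Encodes i² = j² = k² = ijk = -1; note it is not commutative."""
    r1, i1, j1, k1 = p
    r2, i2, j2, k2 = q
    return np.array([
        r1 * r2 - i1 * i2 - j1 * j2 - k1 * k2,  # real part
        r1 * i2 + i1 * r2 + j1 * k2 - k1 * j2,  # i part
        r1 * j2 - i1 * k2 + j1 * r2 + k1 * i2,  # j part
        r1 * k2 + i1 * j2 - j1 * i2 + k1 * r2,  # k part
    ])

i = np.array([0.0, 1.0, 0.0, 0.0])
j = np.array([0.0, 0.0, 1.0, 0.0])
ij = hamilton_product(i, j)  # yields k = (0, 0, 0, 1)
```

In a quaternion dense layer, each weight-input multiplication is a Hamilton product, so every output component depends on all four input components, which is the mechanism behind the latent-dependency argument in the excerpt.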