A survey of quaternion neural networks (2019)
DOI: 10.1007/s10462-019-09752-1
Cited by 146 publications (94 citation statements)
References 65 publications
“…The quaternion convolution is one of the main operations of deep neural networks in the quaternion domain [11,13]. Considering a generic quaternion input vector, q, defined similarly to (1), and a generic quaternion filter matrix defined as W = W_a + W_b î + W_c ĵ + W_d κ̂, the quaternion convolution can be expressed as the following Hamilton product:…”
Section: Network Architecture
confidence: 99%
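A Hamilton product between a quaternion weight W = W_a + W_b î + W_c ĵ + W_d κ̂ and a quaternion input can be realized as a single real block matrix acting on the four stacked components. The following is a minimal NumPy sketch of that idea; the function name and the specific sizes are illustrative, not from the survey:

```python
import numpy as np

def hamilton_matrix(Wa, Wb, Wc, Wd):
    # Real block matrix implementing left Hamilton multiplication by
    # W = Wa + Wb*i + Wc*j + Wd*k, using i^2 = j^2 = k^2 = ijk = -1.
    return np.block([
        [Wa, -Wb, -Wc, -Wd],
        [Wb,  Wa, -Wd,  Wc],
        [Wc,  Wd,  Wa, -Wb],
        [Wd, -Wc,  Wb,  Wa],
    ])

rng = np.random.default_rng(0)
n = 3  # size of each quaternion component (illustrative)
Wa, Wb, Wc, Wd = (rng.standard_normal((n, n)) for _ in range(4))
qa, qb, qc, qd = (rng.standard_normal(n) for _ in range(4))

# Stack the four components and apply the quaternion layer in one matmul.
out = hamilton_matrix(Wa, Wb, Wc, Wd) @ np.concatenate([qa, qb, qc, qd])
```

Note the weight sharing: the same four component matrices appear in every block row, which is what lets the layer capture cross-component relations with a quarter of the parameters of an unconstrained real layer of the same size.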
“…The Hamilton product of (11) allows quaternion neural networks to capture internal latent relations within the features of a quaternion. Each quaternion convolutional layer is followed by a split quaternion Leaky-ReLU activation function [13]. Quaternion batch normalization is not taken into account as it may be considered as a source of randomness that could cause instability [5,6].…”
Section: Network Architecture
confidence: 99%
“…This paper uses the best fit algorithm which has been tested many times by the author to get the results. The whole process of building algorithms in supervised models is discussed, but the author puts much emphasis on feature selection [6]. Many algorithms may over-fit the original data or neglect the period of natural causes.…”
Section: Introduction
confidence: 99%