2021 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip42928.2021.9506529

Removing Dimensional Restrictions On Complex/Hyper-Complex Neural Networks

Abstract: It has been shown that the core reason that complex and hypercomplex valued neural networks offer improvements over their real-valued counterparts is that aspects of their algebra force treating multi-dimensional data as a single entity. However, both are constrained to a set number of dimensions: two for complex and four for quaternions. These observations motivate us to introduce novel vector map convolutions which capture this property while dropping the unnatural dimensionality constraints thei…
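The abstract only sketches the idea, so here is a minimal, hypothetical illustration of a vector map convolution for an arbitrary number of components D (D = 2 would mirror the complex case and D = 4 the quaternion case). The circular weight-sharing pattern, the class name VectorMapConv2d, and all parameter choices are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorMapConv2d(nn.Module):
    """Sketch of a vector map convolution for an arbitrary number of
    components `dims`. Cross-component mixing is modeled as a circular
    permutation of `dims` shared weight kernels, loosely analogous to the
    Hamilton-product pattern of quaternion convolutions; this mixing rule
    is an assumption, not the paper's exact formulation."""

    def __init__(self, dims, in_channels, out_channels, kernel_size, padding=0):
        super().__init__()
        self.dims = dims  # number of components treated as a single entity
        self.padding = padding
        # One kernel per component; each maps in_channels -> out_channels.
        self.weights = nn.Parameter(
            torch.randn(dims, out_channels, in_channels, kernel_size, kernel_size) * 0.02
        )

    def forward(self, x):
        # x: (batch, dims * in_channels, H, W), components stacked along channels.
        comps = x.chunk(self.dims, dim=1)
        outputs = []
        for i in range(self.dims):
            out = 0
            for j in range(self.dims):
                # Output component i mixes every input component j through the
                # shared kernel at index (j - i) mod dims (circular weight sharing).
                w = self.weights[(j - i) % self.dims]
                out = out + F.conv2d(comps[j], w, padding=self.padding)
            outputs.append(out)
        return torch.cat(outputs, dim=1)

# Example: three components of 8 channels each, stacked into 24 input channels.
layer = VectorMapConv2d(dims=3, in_channels=8, out_channels=16, kernel_size=3, padding=1)
y = layer(torch.randn(2, 24, 32, 32))  # -> (2, 48, 32, 32)
```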

Cited by 4 publications (9 citation statements): two published in 2021 and two in 2024. References 10 publications.
“…The multipliers shown in the figure are "[1, 2, 4, 1]," which gives 26 trainable layers. The 50-layer version of the network uses the block multipliers "[3, 4, 6, 3]." For our quaternion-enhanced model, these layer counts do not include the quaternion modules.…”
Section: Methods (mentioning)
confidence: 99%
“…This experiment increases the layer count of the standard convolution-based ResNet [5], the quaternion-valued convolution-based ResNet [2], and the axial-ResNet [14] from 26 to 35 layers. This was done by using a block multiplier of [2, 3, 4, 2] for these models. Although this did not give us exactly 33 layers, it preserved the bottleneck structure of the original design and enhanced comparability.…”
Section: Methods (mentioning)
confidence: 99%
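For readers checking the arithmetic in the statements above: the quoted layer counts are consistent with the usual ResNet bottleneck convention of three convolutions per block plus one stem convolution and one final fully connected layer. The helper below is a back-of-the-envelope sketch under that assumption.

```python
# Rough sanity check of the layer counts quoted above, assuming a standard
# ResNet bottleneck design: each block contributes 3 convolutional layers,
# plus one stem convolution and one final fully connected layer.
def trainable_layers(block_multipliers, layers_per_block=3, stem_and_head=2):
    return sum(block_multipliers) * layers_per_block + stem_and_head

print(trainable_layers([1, 2, 4, 1]))  # 26
print(trainable_layers([3, 4, 6, 3]))  # 50
print(trainable_layers([2, 3, 4, 2]))  # 35, not 33, matching the remark above
```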
“…Unfortunately, most common color image datasets contain RGB images and some tricks are required to process this data type with QNNs. Among them, the most common are padding a zero channel onto the input to encapsulate the image in the four quaternion components, or remodeling the QNN layer with the help of vector maps [31]. In addition, while quaternion neural operations are widespread and easy to integrate into preexisting models, very few attempts have been made to extend models to different domain orders.…”
Section: Introduction (mentioning)
confidence: 99%
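The zero-channel padding trick mentioned above is simple to express in code. The sketch below assumes the extra zero channel fills the first (real) quaternion component, though implementations differ on the ordering; the function name is hypothetical.

```python
import torch

def rgb_to_quaternion_input(x):
    """Pad a zero channel onto an RGB batch so the three color channels plus
    the zero channel fill the four quaternion components. Placing the zero
    channel first (as the real component) is an assumption; some
    implementations append it last instead."""
    zeros = torch.zeros_like(x[:, :1])      # (batch, 1, H, W)
    return torch.cat([zeros, x], dim=1)     # (batch, 4, H, W)

# Example: a batch of 8 RGB images of size 32x32.
q_input = rgb_to_quaternion_input(torch.randn(8, 3, 32, 32))
print(q_input.shape)  # torch.Size([8, 4, 32, 32])
```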