2022 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn55064.2022.9892959

Nested compression of convolutional neural networks with Tucker-2 decomposition

Cited by 2 publications (2 citation statements)
References 12 publications
“…There is also a possibility to use the weights of the pre-trained network and factorize them into a set of smaller tensors using, for example, higher-order singular-value decomposition (HOSVD) [83]. In [84], the above-mentioned techniques were applied to well-known architectures (ResNet [85], VGG [86]) for image classification. These architectures contain both convolutional and fully connected layers.…”
Section: Techniques for the Reduction in Computational and Memory Requirements
confidence: 99%
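For context, the HOSVD-based factorization of pre-trained weights mentioned in this passage can be sketched in plain NumPy. This is a minimal illustrative sketch, not code from [83] or [84]; the unfold and truncated_hosvd helpers, the kernel sizes, and the ranks below are assumptions chosen for the example.

```python
# Illustrative sketch: truncated HOSVD of a pretrained 4-way convolution kernel
# W with shape (T, S, d, d), using only NumPy. Ranks and sizes are assumed values.
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def truncated_hosvd(W, ranks):
    """Return per-mode factor matrices and the projected core tensor.

    ranks[n] = None keeps mode n uncompressed (identity factor).
    """
    factors = []
    core = W.copy()
    for mode, r in enumerate(ranks):
        if r is None:
            factors.append(np.eye(W.shape[mode]))
            continue
        # Leading left singular vectors of the mode-n unfolding of W.
        U, _, _ = np.linalg.svd(unfold(W, mode), full_matrices=False)
        U = U[:, :r]
        factors.append(U)
        # Mode-n product with U.T: project the core onto the retained subspace.
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return factors, core

# Example: a 3x3 kernel with 256 output and 128 input channels (sizes are illustrative).
W = np.random.randn(256, 128, 3, 3)
factors, core = truncated_hosvd(W, ranks=[64, 48, None, None])

# Reconstruct the approximation and report the relative error.
W_hat = core
for mode, U in enumerate(factors):
    W_hat = np.moveaxis(np.tensordot(U, np.moveaxis(W_hat, mode, 0), axes=1), 0, mode)
print("relative error:", np.linalg.norm(W - W_hat) / np.linalg.norm(W))
```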
“…These architectures contain both convolutional and fully connected layers. In the case of fully connected layers, truncated singular-value decomposition was used to factorize their weight matrices, while tensors representing convolutional layers were factorized using Tucker-2 [87] and nested Tucker-2 decompositions [84]. The results obtained for the image classification task for MNIST [88] and CIFAR-10 [89] showed a small decrease in accuracy while reducing both the memory requirement and computational complexity.…”
Section: Techniques for the Reduction in Computational and Memory Requirements
confidence: 99%
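The layer substitutions described in this passage can likewise be illustrated with a short, self-contained sketch. Again this is illustrative rather than the authors' implementation: the retained rank K, the Tucker-2 ranks R3 and R4, and the layer sizes are assumed values, not numbers from [84].

```python
# Illustrative sketch: truncated SVD for a fully connected layer and Tucker-2
# (on the two channel modes) for a convolutional kernel. All sizes and ranks assumed.
import numpy as np

# --- Fully connected layer: truncated SVD of its weight matrix W_fc (out x in) ---
W_fc = np.random.randn(1024, 4096)               # layer sizes are illustrative
K = 128                                          # retained singular values (assumed rank)
U, s, Vt = np.linalg.svd(W_fc, full_matrices=False)
A = U[:, :K] * s[:K]                             # (1024, K)
B = Vt[:K, :]                                    # (K, 4096)
# y = W_fc @ x becomes y = A @ (B @ x): two small matrix products instead of one large one.
print("FC parameters:", W_fc.size, "->", A.size + B.size)

# --- Convolutional layer: Tucker-2 on the two channel modes of W_conv (T, S, d, d) ---
W_conv = np.random.randn(256, 128, 3, 3)
R3, R4 = 64, 48                                  # output / input channel ranks (assumed)
# Leading left singular vectors of the output-channel (mode-0) and input-channel (mode-1) unfoldings.
U_out = np.linalg.svd(W_conv.reshape(256, -1), full_matrices=False)[0][:, :R3]
U_in = np.linalg.svd(np.moveaxis(W_conv, 1, 0).reshape(128, -1), full_matrices=False)[0][:, :R4]
# Core tensor: project the kernel onto both retained channel subspaces.
core = np.tensordot(U_out.T, W_conv, axes=1)                                      # (R3, S, d, d)
core = np.moveaxis(np.tensordot(U_in.T, np.moveaxis(core, 1, 0), axes=1), 0, 1)   # (R3, R4, d, d)
# The original d x d convolution is replaced by three cheaper ones:
#   1) 1x1 convolution with kernel U_in.T : S  -> R4 channels
#   2) d x d convolution with the core    : R4 -> R3 channels
#   3) 1x1 convolution with kernel U_out  : R3 -> T  channels
print("conv parameters:", W_conv.size, "->", U_in.size + core.size + U_out.size)
```

The chosen ranks set the trade-off between approximation error and the reduction in memory and arithmetic that the cited passage reports.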