2021 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip42928.2021.9506128

Compressing Deep CNNs Using Basis Representation and Spectral Fine-Tuning

Abstract: We propose an efficient and straightforward method for compressing deep convolutional neural networks (CNNs) that uses basis filters to represent the convolutional layers, and optimizes the performance of the compressed network directly in the basis space. Specifically, any spatial convolution layer of the CNN can be replaced by two successive convolution layers: the first is a set of three-dimensional orthonormal basis filters, followed by a layer of one-dimensional filters that represents the original spatia…

Cited by 1 publication (3 citation statements). References 13 publications.
“…To understand how we do it, consider the convolution layer of a CNN, whose filters are of size M × N × D × D, where N and M are the number of input channels and output channels respectively, and the spatial dimension of the filters is D × D. These filters are flattened and arranged as columns of the matrix H ∈ ℝ^(M×A), where A = ND². Following [5,7] we compute the singular value decomposition of H. This lets us write it as H = USVᵀ where U = F and SVᵀ = W.…”
Section: Training From Scratch
confidence: 99%
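The decomposition quoted above can be sketched in a few lines of NumPy. The layer sizes below are hypothetical placeholders, not values from the paper; the sketch only illustrates flattening the filters into H and factoring H = USVᵀ, with F identified with U and W with SVᵀ.

```python
import numpy as np

# Hypothetical layer sizes (not from the paper): M output channels,
# N input channels, D x D spatial filters.
M, N, D = 8, 4, 3
A = N * D * D  # A = N * D^2

rng = np.random.default_rng(0)
weights = rng.standard_normal((M, N, D, D))  # stand-in for pretrained filters

# Flatten the M filters into the matrix H with shape (M, A).
H = weights.reshape(M, A)

# Singular value decomposition: H = U S V^T.
U, S, Vt = np.linalg.svd(H, full_matrices=False)

# Identify F with U and W with S V^T, so that H = F W exactly.
F = U
W = np.diag(S) @ Vt

assert np.allclose(F @ W, H)
```

The factorization is exact here; compression comes from later truncating the number of basis filters (columns of F) that are kept.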
“…Low-rank approximation compression methods [5,6,7,8] start with the assumption that the pretrained convolution filters form a rank-deficient matrix. They exploit this fact by representing the filters in a given convolution layer as a weighted linear combination of a set of basis filters [8,7].…”
Section: Introduction
confidence: 99%
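The rank-deficiency assumption in the statement above can be illustrated with a small NumPy sketch: truncate the SVD of the flattened filter matrix to its top-k singular vectors, so every filter becomes a weighted combination of only k basis filters. The sizes and the rank k here are hypothetical, chosen only to make the example self-contained.

```python
import numpy as np

# Hypothetical flattened filter matrix H of shape (M, A), built to be
# rank-deficient by construction (rank k < min(M, A)).
M, A, k = 8, 36, 4
rng = np.random.default_rng(1)
H = rng.standard_normal((M, k)) @ rng.standard_normal((k, A))

# Keep only the top-k singular vectors: a rank-k approximation of H.
U, S, Vt = np.linalg.svd(H, full_matrices=False)
H_approx = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]

# Because H has rank k, the rank-k approximation recovers it (numerically).
assert np.allclose(H, H_approx)
```

For genuinely full-rank pretrained filters the truncation would incur an approximation error, which is why such methods typically fine-tune the network after compression.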