2018
DOI: 10.1007/978-3-030-01249-6_45
Constrained Optimization Based Low-Rank Approximation of Deep Neural Networks

Cited by 45 publications (32 citation statements) · References 14 publications
“…Cosine Transform) and wavelet systems constructed from the one-dimensional DCT and one-dimensional wavelets. Denton et al. [124] proposed low-rank approximation and clustering methods for convolution kernels, achieving a 2x speedup for a single convolutional layer. Li et al. [125] proposed finding the optimal low-rank approximation of a trained CNN, namely constrained-optimization-based low-rank approximation, which bounds the multiply-accumulate operations and memory footprint. Beyond these, there are many other acceleration methods based on low-rank decomposition [29,30,126,127,128]. Although low-rank decomposition is simple and effective for model acceleration and compression, its implementation involves decomposition operations that add extra computational burden and require extensive model retraining to reach convergence. It is also not applicable to neural networks without convolution operations, so this paper does not analyze low-rank decomposition methods specifically. 2.5 Neural Architecture Search: current compression and acceleration methods mainly target specific neural network structures, but once combinations of them are considered, the space of architecture strategies that can be explored grows enormously. Neural architecture search [31,32] is the search over a set of decisions about the different components of a defined neural network; it is a systematic, automated way of learning the best neural network architecture. According to Elsken et al. [129]…”
Section: Low-rank decomposition uses matrix or tensor decomposition to estimate the weight parameters of the convolutional layers in a neural network. A convolution kernel can be viewed as a three-dimensional tensor, and low-rank decomposition removes the redundancy in this tensor through tensor factorization. Much research has used low-rank decomposition to accelerate convolution, for example the early high-dimensional DCT (Discrete… (unclassified)
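The survey quoted above credits the constrained-optimization approach [125] with bounding multiply-accumulate operations and memory footprint. As a hedged illustration of why a low-rank factorization helps on both counts, the following Python sketch simply counts parameters and MACs for a convolutional layer before and after a rank-r factorization; the layer shape, feature-map size, and ranks are made-up examples, not figures from any cited work.

```python
# Illustrative arithmetic only: how a rank-r factorization of a convolutional
# layer reduces multiply-accumulate operations (MACs) and parameters.
# The layer shape below (n output channels, c input channels, d x d kernels,
# H x W output feature map) is a hypothetical example.

def conv_costs(n, c, d, H, W):
    params = n * c * d * d
    macs = params * H * W          # each output pixel needs c*d*d MACs per filter
    return params, macs

def lowrank_costs(n, c, d, H, W, r):
    # Factor the (n x c*d*d) weight matrix into (n x r) @ (r x c*d*d):
    # equivalent to r filters of size c x d x d followed by an n x r 1x1 conv.
    params = r * c * d * d + n * r
    macs = params * H * W
    return params, macs

if __name__ == "__main__":
    n, c, d, H, W = 256, 256, 3, 28, 28   # hypothetical layer
    p0, m0 = conv_costs(n, c, d, H, W)
    for r in (16, 64, 128):
        p1, m1 = lowrank_costs(n, c, d, H, W, r)
        print(f"rank {r:3d}: params x{p0 / p1:.1f} smaller, MACs x{m0 / m1:.1f} fewer")
```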
“…In contrast, [4] achieves compression by combining channel-wise low-rank approximation with separable one-dimensional spatial filters. In [7], multiply-accumulate operations are reduced via constrained optimization. Low-rank factorization and pruning are combined in [8] by cascading the low-rank projections of filters in the current layer to the next layer.…”
Section: Conventional Convolution Layer, Basis Convolution Layer (mentioning)
confidence: 99%
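The quote above mentions that [4] uses separable one-dimensional spatial filters. A minimal numpy sketch of that general idea, assuming a single d×d spatial filter is replaced by its best rank-1 (outer-product) approximation via SVD; this illustrates the concept only and is not the exact procedure of [4].

```python
import numpy as np

# Approximate one d x d spatial filter by the outer product of a vertical and
# a horizontal 1-D filter (the best rank-1 approximation, obtained from SVD).

rng = np.random.default_rng(0)
d = 5
kernel = rng.standard_normal((d, d))            # one spatial slice of a conv kernel

U, s, Vt = np.linalg.svd(kernel)
vertical = U[:, 0] * np.sqrt(s[0])              # d x 1 column filter
horizontal = Vt[0, :] * np.sqrt(s[0])           # 1 x d row filter

rank1 = np.outer(vertical, horizontal)          # separable approximation
rel_err = np.linalg.norm(kernel - rank1) / np.linalg.norm(kernel)

# Applying `vertical` then `horizontal` costs 2*d MACs per pixel instead of d*d.
print(f"relative reconstruction error: {rel_err:.3f}")
```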
“…Two efficient low-rank decomposition schemes have been proposed and used in the literature. Scheme 1 [3,5,7-9] applies a low-rank decomposition to an n×cd² reshape of the weight tensor W. Scheme 2 [1,…”
Section: Efficient Low-rank Decomposition Schemes Of Convolutional Layers Induced By Specific Matrix Reshapes (mentioning)
confidence: 99%
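Scheme 1, as described in the quote above, applies a low-rank decomposition to an n×cd² reshape of the weight tensor W. Below is a minimal numpy sketch of that reshape followed by a truncated SVD; the tensor shapes and the rank are illustrative assumptions, not values from the cited papers.

```python
import numpy as np

# Scheme 1 (as described above): reshape the 4-D weight tensor W of shape
# (n, c, d, d) into an n x (c*d*d) matrix and apply a rank-r truncated SVD.

rng = np.random.default_rng(0)
n, c, d, r = 64, 32, 3, 16
W = rng.standard_normal((n, c, d, d))

W_mat = W.reshape(n, c * d * d)                  # scheme-1 reshape
U, s, Vt = np.linalg.svd(W_mat, full_matrices=False)

A = U[:, :r] * s[:r]                             # n x r   (acts as a 1x1 conv)
B = Vt[:r, :].reshape(r, c, d, d)                # r filters of size c x d x d

# The original layer is replaced by B (a d x d conv with r output channels)
# followed by A (a 1x1 conv mapping r channels back to n channels).
approx = (A @ Vt[:r, :]).reshape(n, c, d, d)
rel_err = np.linalg.norm(W - approx) / np.linalg.norm(W)
print(f"rank {r}: relative reconstruction error {rel_err:.3f}")
```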
“…Out of many compression forms, low-rank compression has recently re-emerged as an efficient way to obtain neural networks that are both smaller in size and reduced in FLOPs [1,2]. An ingredient that made low-rank methods more attractive for deep network compression is research on automatically setting the ranks of the decompositions [1-3], whereas previously the ranks were fixed by hand [4,5] or heuristically estimated [6-8]. To further improve the performance of low-rank compression, we address one overlooked ingredient: the choice of the weight reshaping when applying the decomposition.…”
Section: Introduction (mentioning)
confidence: 99%
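The quote above contrasts hand-fixed or heuristically estimated ranks with automatic rank selection. One common baseline heuristic, given here as an assumption and not as the specific procedure of the cited papers, is to pick per layer the smallest rank that retains a chosen fraction of the weight matrix's spectral energy:

```python
import numpy as np

# Energy-threshold rank selection: keep the smallest rank whose singular
# values account for a given fraction of the total squared spectral energy.

def rank_for_energy(weight_matrix: np.ndarray, energy: float = 0.95) -> int:
    s = np.linalg.svd(weight_matrix, compute_uv=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(cum, energy) + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical layer: a low-rank signal plus noise, matricized to n x (c*d*d).
    n, m, true_rank = 64, 288, 10
    W = rng.standard_normal((n, true_rank)) @ rng.standard_normal((true_rank, m))
    W += 0.05 * rng.standard_normal((n, m))
    print("selected rank:", rank_for_energy(W, energy=0.95))
```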