2017
DOI: 10.48550/arxiv.1711.06494
Preprint

Improved Bayesian Compression

Cited by 8 publications (12 citation statements); citing publications span 2018–2023. Citation types: 0 supporting, 12 mentioning, 0 contrasting. References: 0 publications.

“…Others focus on reducing the cardinality of the network's parameters [27]–[32], and some have proposed general compression frameworks that aim at both sparsification and cardinality reduction [11], [21], [33]. Our entropy-constrained minimization objective (3) (and its continuous relaxation (6)) can be interpreted as a generalization of those works.…”
Section: Related Work (mentioning)
confidence: 99%
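
The entropy-constrained objective referenced above couples the task loss with the coding cost of the weights. As a rough illustration of the coding-cost term only (a minimal numpy sketch, not the cited paper's implementation; the codebook values and size are arbitrary), one can quantize weights against a small codebook and measure the empirical entropy of the assignments, which lower-bounds the bits per weight any lossless coder would need:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 1.0, size=10_000)  # stand-in for one layer's weights

# Hypothetical 4-level codebook; in an entropy-constrained scheme it would
# be learned jointly with the task loss rather than fixed by hand.
codebook = np.array([-0.6, 0.0, 0.3, 0.9])
assignments = np.abs(weights[:, None] - codebook[None, :]).argmin(axis=1)

# Empirical entropy of the assignments, in bits per weight: a lower bound
# on the average code length of any lossless symbol code for this layer.
probs = np.bincount(assignments, minlength=len(codebook)) / len(assignments)
nz = probs[probs > 0]
entropy_bits = -(nz * np.log2(nz)).sum()
print(f"~{entropy_bits:.2f} bits/weight vs. 32 for float32")
```
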
“…Although minimizing the variational lower bound is also well motivated from an MDL point of view [35], the resulting coding scheme is often impractical for real-world scenarios. Therefore, [21], [33], [36] focused on designing suitable priors and posteriors that allow practical coding schemes to be applied to the weight parameters after the variational lower bound has been minimized. This includes a final step in which a lossless entropy coder is applied to the network's parameters, as proposed by [11].…”
Section: Related Work (mentioning)
confidence: 99%
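
The final lossless-coding step this statement describes can be illustrated with a Huffman code over the quantized weight symbols. The sketch below is a hedged, self-contained example (the symbol frequencies are synthetic stand-ins for quantized weight values, not data from the paper):

```python
import heapq
from collections import Counter

def huffman_code_lengths(freqs):
    """Return the Huffman code length (in bits) for each symbol in `freqs`."""
    # Heap entries: (total frequency, tie-breaker, {symbol: depth}).
    heap = [(f, i, {sym: 0}) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        # Merging two subtrees pushes every contained symbol one level deeper.
        merged = {s: d + 1 for s, d in {**a, **b}.items()}
        heapq.heappush(heap, (fa + fb, tie, merged))
        tie += 1
    return heap[0][2]

symbols = Counter({0: 7000, 1: 1500, 2: 1000, 3: 500})  # quantized weight ids
lengths = huffman_code_lengths(symbols)
total = sum(symbols.values())
avg_bits = sum(symbols[s] * lengths[s] for s in symbols) / total
print(f"Huffman average: {avg_bits:.2f} bits/weight (fixed-length: 2 bits)")
```

Because the symbol distribution after quantization is highly skewed (most weights collapse to a few values), the variable-length code here averages 1.45 bits per weight against 2 bits for a fixed-length encoding.
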
“…Han et al. (2015a) combine pruning, quantization, and Huffman coding to achieve a better compression rate. Bayesian methods (Ullrich et al., 2017; Louizos et al., 2017; Federici et al., 2017) are also used to retrain the model so that it has more room to be compressed. He et al. (2018) use reinforcement learning to design a compression algorithm.…”
Section: Related Work on Model Compression (mentioning)
confidence: 99%
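
For the pruning-plus-quantization pipeline attributed to Han et al. (2015a), a minimal sketch in the same spirit (not their implementation; the sparsity level, codebook size, and k-means initialization below are arbitrary choices) looks like:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(0.0, 0.1, size=(256, 256))  # stand-in dense weight matrix

# 1) Magnitude pruning: zero out the 90% smallest-magnitude weights.
threshold = np.quantile(np.abs(W), 0.9)
mask = np.abs(W) >= threshold
W_pruned = W * mask

# 2) Codebook quantization of the surviving weights via a few k-means steps.
survivors = W_pruned[mask]
codebook = np.quantile(survivors, np.linspace(0.05, 0.95, 16))  # init
for _ in range(10):
    assign = np.abs(survivors[:, None] - codebook[None, :]).argmin(axis=1)
    for k in range(len(codebook)):
        if np.any(assign == k):
            codebook[k] = survivors[assign == k].mean()

# Reconstruct the quantized matrix from the final codebook.
assign = np.abs(survivors[:, None] - codebook[None, :]).argmin(axis=1)
W_q = np.zeros_like(W)
W_q[mask] = codebook[assign]
print(f"sparsity: {1 - mask.mean():.0%}, codebook size: {len(codebook)}")
```

An entropy coder, as in the previous sketch, would then be applied to the codebook indices to obtain the final compressed representation.
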
“…The other category directly compresses a given large neural network using pruning, quantization, and matrix factorization, including LeCun et al. (1990), Hassibi and Stork (1993), Han et al. (2015b,a), and Cheng et al. (2015). There are also more advanced methods that train the neural network with Bayesian techniques to aid pruning or quantization at a later stage, such as Ullrich et al. (2017), Louizos et al. (2017), and Federici et al. (2017).…”
Section: Introduction (mentioning)
confidence: 99%
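
The matrix-factorization route mentioned here can be illustrated with a truncated SVD of a dense layer's weight matrix; the shapes and rank below are arbitrary, and a randomly generated matrix (unlike a trained one, whose spectrum typically decays faster) gives a pessimistic approximation error:

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(512, 512))  # stand-in for a trained dense layer

# Replace W with a rank-64 factorization A @ B, cutting the parameter count.
rank = 64
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :rank] * s[:rank]  # (512, 64): left factors scaled by singular values
B = Vt[:rank, :]            # (64, 512)

params_before = W.size
params_after = A.size + B.size
rel_err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(f"params: {params_before} -> {params_after}, rel. error: {rel_err:.3f}")
```
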