2018 IEEE International Conference on Big Data (Big Data)
DOI: 10.1109/bigdata.2018.8622358

Versatile Communication Optimization for Deep Learning by Modularized Parameter Server

Cited by 1 publication (2 citation statements) · References 15 publications

“…Among them, quantization (Han et al., 2015) is the most direct form of expression. (Wu et al., 2018) achieves compression through weight sharing by k-means clustering on the weights. However, since the shared parameters need to be restored to their original locations, there are no runtime memory savings.…”
Section: Parameter Sharing (mentioning)
confidence: 99%
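The statement above describes weight sharing via k-means clustering and notes that the shared values must be expanded back to their original positions at inference time. A minimal sketch of that scheme follows, assuming NumPy and scikit-learn; it is an illustration of the technique as described in the citation statement, not code from Wu et al. (2018).

```python
# Sketch of weight sharing via k-means clustering (illustrative only).
import numpy as np
from sklearn.cluster import KMeans

def compress_weights(weights: np.ndarray, n_clusters: int = 16):
    """Cluster a layer's weights; store only the codebook and per-weight indices."""
    flat = weights.reshape(-1, 1)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(flat)
    codebook = km.cluster_centers_.ravel()      # n_clusters shared values
    indices = km.labels_.astype(np.uint8)       # log2(n_clusters) bits per weight suffice
    return codebook, indices, weights.shape

def restore_weights(codebook, indices, shape):
    """Expand shared values back to the original tensor; hence no runtime memory savings."""
    return codebook[indices].reshape(shape)

# Usage: compress a random 256x128 weight matrix, then restore it for inference.
W = np.random.randn(256, 128).astype(np.float32)
codebook, idx, shape = compress_weights(W, n_clusters=16)
W_restored = restore_weights(codebook, idx, shape)
```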
“…In related research, many compression methods for neural networks have been proposed, including parameter pruning (Hu et al., 2016; Pan et al., 2020; Sui et al., 2021), parameter sharing (Wu et al., 2018), low-rank decomposition (Swaminathan et al., 2020), and knowledge distillation (Li et al., 2020; Prakosa et al., 2021). Most of these methods have certain limitations: they may not be applicable to large neural network models, may apply only to classification tasks, or may require parameter settings chosen empirically (Neill, 2020; Rongrong et al., 2018).…”
Section: Introduction (mentioning)
confidence: 99%
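As an illustration of one of the techniques listed above, here is a minimal sketch of low-rank decomposition of a dense weight matrix via truncated SVD, assuming NumPy; the exact factorization used in the cited papers may differ.

```python
# Sketch of low-rank decomposition of a weight matrix (illustrative only).
import numpy as np

def low_rank_factor(W: np.ndarray, rank: int):
    """Approximate W (m x n) as A @ B with A (m x rank) and B (rank x n)."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # fold singular values into the left factor
    B = Vt[:rank, :]
    return A, B

# Usage: parameters drop from 512*256 to 32*(512+256); W @ x becomes A @ (B @ x).
W = np.random.randn(512, 256).astype(np.float32)
A, B = low_rank_factor(W, rank=32)
approx_error = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
```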