2021
DOI: 10.1007/978-3-030-69544-6_41

FreezeNet: Full Performance by Reduced Storage Costs


Cited by 10 publications (44 citation statements: 1 supporting, 43 mentioning, 0 contrasting)
References 17 publications

“…Another approach, which is similar to [93] and was proposed by Rosenfeld and Tsotsos [57], Wimmer et al. [58], and Sung et al. [59], uses frozen weights on top of the trainable ones. The resulting transformation is given by…”
Section: Freezing Parameters
confidence: 99%
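
The excerpt truncates before the transformation itself, so the formula cannot be recovered from this page. As a rough illustration of the general idea of frozen weights on top of trainable ones, here is a minimal PyTorch sketch; the class name, the binary-mask formulation, and the trainable_frac parameter are assumptions for illustration, not the construction from [57-59].

```python
import torch
import torch.nn as nn

class PartiallyFrozenLinear(nn.Module):
    """Hypothetical linear layer mixing trainable and frozen weights.

    A fixed binary mask picks, per entry, either a trainable value or a
    value frozen at initialization (the mask formulation is an assumption;
    the quoted excerpt elides the actual transformation).
    """

    def __init__(self, in_features: int, out_features: int,
                 trainable_frac: float = 0.1):
        super().__init__()
        init = torch.randn(out_features, in_features) * in_features ** -0.5
        # Frozen copy: a buffer receives no gradients and is never updated.
        self.register_buffer("w_frozen", init.clone())
        # Trainable copy of the same initialization.
        self.w_train = nn.Parameter(init.clone())
        # Fixed binary mask: 1 = trainable entry, 0 = frozen entry.
        mask = (torch.rand(out_features, in_features) < trainable_frac).float()
        self.register_buffer("mask", mask)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Effective weight: trainable values where the mask is 1,
        # frozen initial values everywhere else.
        w = self.mask * self.w_train + (1.0 - self.mask) * self.w_frozen
        return x @ w.t()
```

Only the masked entries of w_train receive gradients, so a checkpoint can shrink to those entries plus whatever regenerates the frozen part (e.g. a random seed), which is the storage angle in the paper's title.
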
“…Freezing a DNN means that only parts of the network are trained, whereas the remaining ones are frozen at their initial/pre-trained values [55-59]. This leads to faster convergence of the networks [55] and reduced communication costs for distributed training [59].…”
Section: Introduction
confidence: 99%
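
To make "only parts of the network are trained" concrete, here is a standard PyTorch sketch (the architecture and the choice of frozen layer are arbitrary assumptions): setting requires_grad=False freezes parameters at their initial or pre-trained values, and excluding them from the optimizer is also what reduces the gradient traffic in distributed training.

```python
import torch
import torch.nn as nn

# A small MLP; the first layer is frozen at its initial values and only
# the remaining layers are trained (the split is arbitrary, for illustration).
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Frozen parameters receive no gradients and are never updated.
for p in model[0].parameters():
    p.requires_grad = False

# Hand only the still-trainable parameters to the optimizer; in a
# distributed setting this also shrinks the gradients to communicate.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=0.1
)
```
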
“…To reduce the theoretical training FLOPs, a new trend of exploring sparsity at an early stage [10,11,38,39,40] has emerged to embrace the promising sparse training paradigm. SNIP [10] finds the sparse masks based on the saliency score of each weight that is obtained after training the dense model for only a few iterations.…”
Section: Pruning at an Early Stage
confidence: 99%
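
The saliency idea behind SNIP can be sketched briefly; this is an assumed simplification (the function name snip_masks, the single scoring batch, and the global top-k selection are illustrative choices), scoring each weight by |weight x gradient| and keeping the highest-scoring fraction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def snip_masks(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
               sparsity: float) -> list:
    """Score every weight by |weight * gradient| on one batch and keep
    the top (1 - sparsity) fraction globally (a sketch of SNIP-style
    connection sensitivity; details differ from the paper)."""
    loss = F.cross_entropy(model(x), y)
    weights = [p for p in model.parameters() if p.dim() > 1]
    grads = torch.autograd.grad(loss, weights)
    scores = [(w * g).abs() for w, g in zip(weights, grads)]
    flat = torch.cat([s.flatten() for s in scores])
    k = max(1, int((1.0 - sparsity) * flat.numel()))
    threshold = torch.topk(flat, k).values.min()
    # Binary masks: 1 keeps a weight, 0 prunes it.
    return [(s >= threshold).float() for s in scores]
```
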
“…The fixed-mask approach [9,10,45,46,47] has been proposed to decouple pruning and training such that after pruning, the sparse model training can be executed on edge devices. SNIP [9] preserves the loss after pruning based on connection sensitivity.…”
Section: Sparse Training with Fixed Sparsity Mask
confidence: 99%
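
A fixed-mask training loop can then look like the following sketch (the function name and loop structure are assumptions): the masks produced by pruning never change, and re-applying them after every update keeps pruned weights at exactly zero, which is what lets the sparse model be trained as-is on constrained devices.

```python
import torch

def train_with_fixed_masks(model, masks, loader, optimizer, loss_fn):
    """Sparse training with a fixed sparsity mask: pruning is decoupled
    from training, so the masks stay constant for the whole run."""
    weights = [p for p in model.parameters() if p.dim() > 1]
    # Zero out pruned weights once before training starts.
    with torch.no_grad():
        for w, m in zip(weights, masks):
            w.mul_(m)
    for x, y in loader:
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
        # Re-apply the fixed masks so pruned connections stay inactive.
        with torch.no_grad():
            for w, m in zip(weights, masks):
                w.mul_(m)
```

For instance, masks produced by the snip_masks sketch above could be passed directly as the masks argument.
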