2022
DOI: 10.1016/j.neunet.2021.11.029

LOss-Based SensiTivity rEgulaRization: Towards deep sparse neural networks


Cited by 22 publications (16 citation statements)
References 15 publications

“…An effective way to avoid this issue is presented in [17], where the idea of output-based sensitivity is introduced: weights are selectively penalized depending on their capability to induce variations on network outputs when changed. A refinement of this method is presented in [18], where state-of-the-art results are reached in image classification thanks to the introduction of loss-based sensitivity, which aims at shrinking the weights that contribute the least to the final loss value.…”
Section: Related Work (mentioning, confidence: 99%)
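
As a rough illustration of the two sensitivity notions quoted above, the following minimal PyTorch sketch (model, data, and variable names are hypothetical, not taken from the cited papers) compares a loss-based sensitivity, the magnitude of the loss gradient with respect to each weight, with an output-based proxy, the gradient of an output norm; low-sensitivity weights are the natural candidates for stronger shrinkage.

```python
import torch
import torch.nn as nn

# Hypothetical toy model and data, for illustration only.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
params = list(model.parameters())
x = torch.randn(32, 8)
y = torch.randint(0, 4, (32,))

# Loss-based sensitivity: |dL/dw|, how much each weight influences the loss value.
loss = nn.functional.cross_entropy(model(x), y)
loss_sens = [g.abs() for g in torch.autograd.grad(loss, params)]

# Output-based sensitivity (proxy): gradient of an output norm w.r.t. each weight,
# i.e. how strongly perturbing the weight moves the network outputs.
out_norm = model(x).pow(2).sum()
out_sens = [g.abs() for g in torch.autograd.grad(out_norm, params)]

# Weights with low sensitivity are candidates for stronger shrinkage / pruning.
```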
“…A drawback of these methods is that the norms of all neural weights are driven close to zero without taking into account the relevance of each weight in the neural architecture, as discussed in detail in [17,18].…”
Section: Introduction (mentioning, confidence: 99%)
“…A number of approaches have been proposed, especially in recent years [10,12,15]. One recent state-of-the-art approach, LOBSTER [23], proposes to penalize the parameters by their gradient-weighted ℓ2 norm, leading to the update rule…”
Section: Enforcing Rem With Pruning (mentioning, confidence: 99%)
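
The update rule itself is truncated in the excerpt above. Purely as a hedged sketch of what a gradient-weighted decay step can look like, the snippet below assumes the per-weight decay is modulated by an insensitivity factor max(0, 1 - |dL/dw|), so that parameters whose loss gradient is small are shrunk more strongly; the exact LOBSTER formulation (including any normalization of the sensitivity) is the one given in the cited paper, not this sketch.

```python
import torch
import torch.nn as nn

def sensitivity_weighted_step(model, loss, lr=1e-3, lmbda=1e-4):
    """One illustrative update: plain SGD plus a weight-decay term scaled,
    per weight, by an insensitivity factor max(0, 1 - |dL/dw|).
    A sketch of a gradient-weighted penalty, not the exact LOBSTER rule."""
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)
    with torch.no_grad():
        for p, g in zip(params, grads):
            insensitivity = torch.clamp(1.0 - g.abs(), min=0.0)
            p -= lr * g                          # standard gradient step
            p -= lr * lmbda * insensitivity * p  # shrink low-sensitivity weights more

# Usage with a hypothetical model and batch:
model = nn.Linear(10, 2)
x, y = torch.randn(16, 10), torch.randint(0, 2, (16,))
loss = nn.functional.cross_entropy(model(x), y)
sensitivity_weighted_step(model, loss)
```

Under this kind of selective decay, weights that barely affect the loss are progressively driven towards zero while relevant ones are left nearly untouched, which is what yields high sparsity in the cited works.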
“…Historically, techniques based on L2 regularization have been among the most popular ways to obtain highly sparse models. A central drawback of such algorithms is that they do not directly account for weight relevance in the neural architecture (see [3,4]); instead, the entire set of parameters is forced towards very small values.…”
Section: Introduction (mentioning, confidence: 99%)