2022 IEEE International Symposium on Circuits and Systems (ISCAS) 2022
DOI: 10.1109/iscas48785.2022.9937284
Unstructured Weight Pruning in Variability-Aware Memristive Crossbar Neural Networks

Cited by 5 publications (2 citation statements) · References 30 publications
“…While these techniques are typically not the most effective in the literature, they de facto nullify the necessity of designing ad hoc implementations for the pruned model, resulting in totally hardware-friendly methods. In contrast, non-structured pruning methods [18], [19] allow the removal of single interconnections (and, consequently, of the related weights), granting more degrees of freedom.³ Many non-linear strategies have also been tested to transition from β = 1 to β = 0, and each of them has shown a different impact on the trend of the validation accuracy during training. However, every strategy seems to yield the same final accuracy.…”

Section: Pruning Techniques Classification
Confidence: 99%
“…While these techniques are typically not the most effective in the literature, they de facto nullify the necessity of designing ad hoc implementations for the pruned model, resulting in totally hardware-friendly methods. Conversely, non-structured pruning methods [15], [16] allow the removal of single interconnections (and, consequently, of the related weights), granting more degrees of freedom to the pruning process and resulting in better compression results. Nonetheless, this comes with an in-hardware computational/memory overhead due to the encoding of the resulting sparse weight matrices [17]–[19].…”

Section: Pruning Techniques Classification
Confidence: 99%
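The excerpts above contrast structured pruning with unstructured (per-weight) pruning, noting that the latter produces sparse weight matrices whose encoding carries an in-hardware overhead. As an illustration only (not taken from the cited paper), the following sketch applies magnitude-based unstructured pruning to a random weight matrix and shows the index bookkeeping that a sparse (COO-style) encoding requires; the sparsity level and matrix size are arbitrary choices:

```python
import numpy as np

# Illustrative sketch: unstructured (magnitude) pruning removes individual
# weights below a threshold, leaving a sparse matrix of surviving weights.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))

sparsity = 0.75  # prune the 75% smallest-magnitude weights (arbitrary choice)
threshold = np.quantile(np.abs(W), sparsity)
mask = np.abs(W) >= threshold
W_pruned = W * mask

# The survivors are scattered, so a sparse encoding must store indices as
# well as values, e.g. one (row, col, value) triple per nonzero (COO format).
rows, cols = np.nonzero(W_pruned)
values = W_pruned[rows, cols]
print(f"kept {values.size}/{W.size} weights")
```

The extra `rows`/`cols` arrays are exactly the encoding overhead the third excerpt refers to: unlike structured pruning, where whole rows or columns disappear and the dense layout shrinks cleanly, unstructured pruning trades better compression for this per-weight indexing cost.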