2020 25th Asia and South Pacific Design Automation Conference (ASP-DAC)
DOI: 10.1109/asp-dac47756.2020.9045192
An Energy-Efficient Quantized and Regularized Training Framework For Processing-In-Memory Accelerators

Cited by 30 publications (9 citation statements) | References 12 publications
“…PIM for DL training. Another body of works leverages PIM techniques to accelerate DL training [196, 247-258]. These works mainly utilize the analog computation capabilities of non-volatile memory (NVM) technologies to implement training of deep neural networks [247-250, 252, 254, 255, 257].…”
Section: Related Work (mentioning)
confidence: 99%
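The analog computation these works rely on can be made concrete with a minimal sketch of crossbar matrix-vector multiplication: weights are stored as cell conductances, inputs are applied as word-line voltages, and each bit-line current accumulates the products by Ohm's and Kirchhoff's laws. The function below is an idealized illustration; the names, the conductance window, and the differential-pair encoding are assumptions, not details of any cited design.

```python
import numpy as np

def crossbar_mvm(weights, inputs, g_min=1e-6, g_max=1e-4):
    """Idealized analog MVM on an NVM crossbar.

    Weights map linearly to cell conductances in [g_min, g_max]; a
    differential pair of columns encodes signs. Inputs drive the word
    lines as voltages, and each bit-line current is the Kirchhoff sum
    of V * G products along its column.
    """
    w_max = np.abs(weights).max()
    g_pos = g_min + (np.clip(weights, 0, None) / w_max) * (g_max - g_min)
    g_neg = g_min + (np.clip(-weights, 0, None) / w_max) * (g_max - g_min)
    i_out = inputs @ g_pos - inputs @ g_neg      # bit-line currents (A)
    return i_out * w_max / (g_max - g_min)       # back to weight units

rng = np.random.default_rng(0)
W, x = rng.standard_normal((8, 4)), rng.standard_normal(8)
print(np.allclose(crossbar_mvm(W, x), x @ W))    # True in the ideal model
```

In this ideal model the common g_min offset cancels between the positive and negative column planes, so the readback matches the digital product exactly; real devices add nonlinearity and noise, which is what training-in-memory schemes must tolerate.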
“…Other notable AiMC designs are not limited by such constraints, and still allow flexibility regarding the quantization range of the ADC. As an example, [34] uses a current-sensing approach based on an RRAM array whose load resistance consists of a programmable RRAM cell used to dynamically rescale the summation line current range to a fixed voltage range. Similarly, [32] is a charge-discharge based SRAM AiMC design that includes a configurable replica SRAM column used to provide a voltage reference to dynamically change the quantization range of the ADC.…”
Section: Range Determination (mentioning)
confidence: 99%
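A minimal numerical sketch of this dynamic-range idea (hypothetical parameter names and values, not the circuits of [34] or [32]): the summation-line value is divided by a dynamically programmed scale so that a uniform ADC with a fixed full-scale range digitizes it without clipping, and the scale is undone in the digital domain.

```python
import numpy as np

def uniform_adc(v, v_ref=1.0, bits=4):
    """Uniform ADC with a fixed full-scale range [-v_ref, v_ref]."""
    levels = 2 ** bits - 1
    code = np.round((np.clip(v, -v_ref, v_ref) + v_ref) / (2 * v_ref) * levels)
    return code / levels * 2 * v_ref - v_ref     # reconstructed value

def dynamic_scale_readout(analog, v_ref=1.0, bits=4):
    """Rescale the summation-line value into the ADC's fixed range,
    digitize uniformly, then undo the scale digitally."""
    scale = np.abs(analog).max() / v_ref         # programmed per readout
    return uniform_adc(analog / scale, v_ref, bits) * scale

i_line = np.array([0.3, 7.9, -2.4, 5.1])         # raw column sums
print(dynamic_scale_readout(i_line))             # quantized, no clipping
```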
“…Hardware-based methods to control the scaling factor of AiMC are not limited to charge-based implementations. Notably, the current-sensing approach based on an RRAM array introduced in [34] uses a load resistance consisting of a programmable RRAM cell to dynamically rescale the summation line current range to a fixed voltage range. This observation, along with the previously mentioned approaches, testifies that dynamic scaling can be generalized, across a wide range of device types, for both current-sensing and charge-based AiMC implementations.…”
Section: Hardware Supported Scaling (mentioning)
confidence: 99%
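Because the readout logic in the sketch above never asks what the analog value physically is, the same scale-then-digitize-then-rescale pattern covers both device families named here. The values below (a current-sensed RRAM column and a charge-based SRAM column) are illustrative only, and the snippet reuses dynamic_scale_readout from the previous sketch.

```python
import numpy as np  # dynamic_scale_readout defined in the sketch above

i_rram = np.array([12e-6, 48e-6, -30e-6])        # column currents (A)
q_sram = np.array([0.8e-12, -2.1e-12, 1.5e-12])  # shared charges (C)

for column in (i_rram, q_sram):
    print(dynamic_scale_readout(column, v_ref=1.0, bits=4))
```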
“…However, such methods will be critically influenced by the extreme data, and discard abundant information on small but major values. Some researchers use nonlinear activation quantization methods to compensate for the extreme values and obtain better performance, but these require generating nonuniform reference signals, which introduce a complex fabrication process and are not friendly to the hardware implementation of ADCs/DACs (Sun et al., 2020). Therefore, a uniform and clipped activation quantization strategy is used to better match the characteristics of ADC/DAC implementations and ease the hardware design.…”
Section: Quantization Precision and Hardware Overhead (mentioning)
confidence: 99%
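The uniform, clipped quantizer described here can be sketched as follows (a PACT-style formulation; the clip threshold alpha and the bit width are illustrative assumptions, not the exact scheme of Sun et al., 2020). Because the steps are equally spaced, the ADC/DAC only needs uniform reference levels.

```python
import numpy as np

def clipped_uniform_quantize(x, alpha=6.0, bits=4):
    """Quantize activations uniformly over the clipped range [0, alpha].

    Uniform steps match equally spaced ADC/DAC reference levels;
    nonlinear schemes would instead need nonuniform references.
    """
    step = alpha / (2 ** bits - 1)
    x_clipped = np.clip(x, 0.0, alpha)           # bound the extreme values
    return np.round(x_clipped / step) * step     # snap to the uniform grid

acts = np.array([0.1, 2.7, 5.9, 11.3])           # 11.3 gets clipped to alpha
print(clipped_uniform_quantize(acts))
```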
“…Sparse parameters widely exist in neural networks thanks to the rectified linear unit (ReLU) activation function and regularization-based training methods. Enhancing the sparsity of the weights and the activations can equally be viewed as increasing the ratio of high-resistance-state devices and low-amplitude voltage signals in the hardware implementation, which dominates in reducing the energy consumption (Sun et al., 2020). However, the sparsity of the activations can be further utilized to reduce the required ADC precision and the corresponding energy consumption.…”
Section: Introduction (mentioning)
confidence: 99%
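The link between activation sparsity and ADC precision can be illustrated with a rough counting argument (an idealized model with assumed numbers, not the paper's analysis): with fewer active rows driving a column, the worst-case partial sum shrinks, so fewer ADC bits represent it losslessly.

```python
import numpy as np

def required_adc_bits(n_active, cell_bits=1):
    """Bits needed to represent a column sum of n_active unit inputs
    times cell values in [0, 2**cell_bits - 1] without loss."""
    max_sum = n_active * (2 ** cell_bits - 1)
    return int(np.ceil(np.log2(max_sum + 1)))

rows = 128
for sparsity in (0.0, 0.5, 0.9):
    active = int(rows * (1 - sparsity))
    print(f"sparsity={sparsity:.1f}: {active:3d} active rows -> "
          f"{required_adc_bits(active)} ADC bits")
```

Under these assumptions, going from dense activations to 90% sparsity on a 128-row column halves the ADC resolution (8 bits down to 4), which is where the energy saving in the quoted argument comes from.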