2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2017.7953288

LogNet: Energy-efficient neural networks using logarithmic computation

Cited by 123 publications (71 citation statements)
References 6 publications
“…For instance, using 4 bits in linear quantization results in a 27.8% loss in accuracy versus a 5% loss for log base-2 quantization for VGG-16 [117]. Furthermore, when weights are quantized to powers of two, the multiplication can be replaced with a bitshift [122,135].…”
Section: A. Reduce Precision
confidence: 99%
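To make the bitshift substitution mentioned in the excerpt above concrete, here is a minimal Python sketch assuming a simple nearest-power-of-two rounding of each weight and integer activations; the function names are illustrative and not taken from the cited works.

```python
import math

def quantise_pow2(w):
    """Round a nonzero real-valued weight to the nearest signed power of two."""
    sign = 1 if w >= 0 else -1
    exponent = round(math.log2(abs(w)))    # nearest integer exponent (zero weights omitted for brevity)
    return sign, exponent                  # w is approximated by sign * 2**exponent

def shift_multiply(x_int, sign, exponent):
    """Compute x_int * (sign * 2**exponent) with a shift instead of a multiply."""
    shifted = x_int << exponent if exponent >= 0 else x_int >> (-exponent)
    return sign * shifted

s, e = quantise_pow2(0.26)                 # 0.26 is rounded to +2**-2 = 0.25
print(shift_multiply(80, s, e))            # 80 * 0.25 -> 20, realised as a right shift by 2
```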
“…While logarithmic representation can also be used for activations, this has yet to be explored. LogNet's authors quantised CNNs with weights encoded in a four-bit logarithmic format, after which they performed retraining to recover some lost accuracy [76]. Their experiments with the ImageNet dataset revealed 4.9 pp and 4.6 pp top-five accuracy drops for AlexNet and VGG16, respectively.…”
Section: 3
confidence: 99%
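A hedged sketch of what a four-bit logarithmic weight format of this kind might look like (one sign bit plus a three-bit exponent index) is given below; the exponent range, zero handling, and names are assumptions for illustration and may differ from LogNet's actual encoding.

```python
import math

EXP_MIN = -7                                   # assumed smallest exponent: indices 0..7 cover 2**-7 .. 2**0

def encode_log4(w):
    """Encode a weight as (sign bit, 3-bit exponent index)."""
    sign = 0 if w >= 0 else 1
    mag = abs(w)
    if mag == 0:
        return sign, 0                         # clamp zero to the smallest representable level
    e = min(0, max(EXP_MIN, round(math.log2(mag))))
    return sign, e - EXP_MIN                   # store the exponent as a 3-bit index

def decode_log4(sign, idx):
    """Reconstruct the power-of-two weight from its 4-bit code."""
    w = 2.0 ** (idx + EXP_MIN)
    return -w if sign else w

code = encode_log4(0.19)
print(code, decode_log4(*code))                # 0.19 is stored as +2**-2 = 0.25
```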
“…In custom hardware, a multiplication between an exponentially quantised weight parameter and an activation can be implemented cheaply using a variable-length binary shifter. With LogNet, CNN inference is performed on FPGAs with four-bit logarithmic-quantised weights [76]. Experiments with three convolutional layers showed an over-3.0× energy efficiency improvement vs an Nvidia Titan X GPU implementation, while a four-bit logarithmic implementation of AlexNet demonstrated an around-5 pp accuracy loss for ImageNet.…”
Section: 3.2
confidence: 99%
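The shift-based datapath this excerpt describes can be emulated in software roughly as follows; this is an illustrative sketch of a multiplier-free inner product over power-of-two weights, not LogNet's FPGA implementation, and the names are assumptions.

```python
def shift_mac(activations, signs, exponents):
    """Accumulate sum_i sign_i * (x_i * 2**e_i) using shifts and adds only."""
    acc = 0
    for x, s, e in zip(activations, signs, exponents):
        term = x << e if e >= 0 else x >> (-e)   # variable-length shift in place of a multiply
        acc += term if s > 0 else -term
    return acc

x = [12, -7, 30]                   # integer activations
signs = [+1, -1, +1]
exps = [-1, -2, -3]                # weights 0.5, -0.25, 0.125 as signed powers of two
print(shift_mac(x, signs, exps))   # exact value 11.5; shift truncation gives 11
```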
“…Quantization and activation pruning are common practices in CNNs to reduce overheads associated with arithmetic operations. Different encoding-based quantization schemes can be found in the literature, such as fixed-point linear quantization, logarithmic quantization, and binarization [33,34,35]. CNN accelerators supporting low precision can result in a resource-efficient solution.…”
Section: Introduction
confidence: 99%
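For reference, the three weight-encoding schemes named in this excerpt can be sketched as simple quantiser functions; the 4-bit width and scale choices below are assumptions for illustration only, not parameters from the cited works.

```python
import numpy as np

def linear_quantise(w, bits=4, max_val=1.0):
    """Symmetric fixed-point grid with 2**(bits-1) - 1 positive levels."""
    levels = 2 ** (bits - 1) - 1
    step = max_val / levels
    return np.clip(np.round(w / step), -levels, levels) * step

def log_quantise(w, bits=4):
    """Round magnitudes to powers of two; exponent range set by the bit width."""
    sign = np.sign(w)
    e = np.clip(np.round(np.log2(np.abs(w) + 1e-12)), -(2 ** (bits - 1) - 1), 0)
    return sign * 2.0 ** e

def binarise(w):
    """Sign-only (+1/-1) weights."""
    return np.where(w >= 0, 1.0, -1.0)

w = np.array([0.03, -0.2, 0.7])
print(linear_quantise(w))   # approx [0.0, -0.143, 0.714]
print(log_quantise(w))      # [0.03125, -0.25, 0.5]
print(binarise(w))          # [1.0, -1.0, 1.0]
```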