2022 IEEE International Solid-State Circuits Conference (ISSCC)
DOI: 10.1109/isscc42614.2022.9731668

Hiddenite: 4K-PE Hidden Network Inference 4D-Tensor Engine Exploiting On-Chip Model Construction Achieving 34.8-to-16.0TOPS/W for CIFAR-100 and ImageNet

Cited by 16 publications (16 citation statements)
References 5 publications
“…To further verify the generalization ability of the D-Net algorithm model, we used the fashion_mnist dataset (Khanday et al, 2021) and the cifar_100 dataset (Hirose et al, 2022) for further experiments.…”
Section: Public Datasetsmentioning
confidence: 99%
“…The SLTH merges not only training and pruning but also quantization: strong lottery tickets have been found to be robust to binarization [34], [41], sparser when ternary masks are used [42], and more accurate with a scalar mask [43]. Furthermore, sparse random weights and binary masks can be exploited to design energy-efficient inference hardware [4], which can even switch the binary mask to adjust computational cost at the edge [4] or reuse the same random weights for a different task [44].…”
Section: The Strong Lottery Ticket Hypothesis
confidence: 99%
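The supermask idea quoted above can be sketched in a few lines: the weights stay frozen at their random initialization, and a binary mask selects the subnetwork that is actually used at inference. A minimal sketch assuming NumPy; the layer size, seed, and mask density below are illustrative, not values from the paper (in practice the mask is found by training, not drawn at random).

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # fixed seed: the weights themselves are never trained

# Hypothetical layer shape for illustration only.
W = rng.standard_normal((16, 8))        # frozen random weights
mask = rng.random((16, 8)) < 0.5        # binary supermask (illustrative; normally learned)

def masked_forward(x):
    # Only the masked subnetwork of the random weights contributes to the output.
    return np.maximum((W * mask) @ x, 0.0)  # ReLU activation

x = rng.standard_normal(8)
y = masked_forward(x)
```

Because the mask is binary, hardware can skip the masked-out multiplications entirely, which is the energy-efficiency angle the quoted passage points to.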
“…All weights and biases are counted as occupying 32 bits each. However, weights need not be stored in the case of supermask training, since they can be regenerated on the fly from the original seed with a random number generator [1], [4]. Furthermore, this seed can be substituted with a hash of other model parameters [4], so it need not be stored either.…”
Section: Model Compression Scheme
confidence: 99%
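The storage argument in the quote rests on the fact that a seeded pseudorandom generator is deterministic: regenerating from the same seed always yields the same weights, so only the seed and the mask need to be stored. A minimal sketch assuming NumPy; the shape and seed value are illustrative assumptions, not from the paper.

```python
import numpy as np

def generate_weights(seed, shape):
    # Weights are reproduced on demand from the seed; they never need storing.
    return np.random.default_rng(seed).standard_normal(shape)

seed = 1234                          # illustrative seed value
W1 = generate_weights(seed, (16, 8))
W2 = generate_weights(seed, (16, 8))
# Deterministic regeneration: both calls produce identical weight tensors.
assert np.array_equal(W1, W2)
```

This is the property the quoted passage exploits: the seed (or a hash that stands in for it) is a complete, compact description of the entire random weight tensor.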