2021 IEEE Computer Society Annual Symposium on VLSI (ISVLSI)
DOI: 10.1109/isvlsi51109.2021.00056
A Microarchitecture Implementation Framework for Online Learning with Temporal Neural Networks

Cited by 6 publications (19 citation statements). References 19 publications.
“…In order to report the optimization gains presented in Section IV, the following steps are adopted: 1) Genus is used to synthesize the original functional modules from [6] with the ASAP7 standard cell library and establish the baseline values; 2) TNN7 macro equivalents of the original modules are designed by either (i) structurally optimizing at the microarchitectural level, or (ii) creating mixed-signal circuits from scratch in Virtuoso; 3) Genus is used to resynthesize the modules by replacing the ASAP7 standard cells with the TNN7 .lib and .lef files (obtained from Liberate and Abstract), to obtain post-synthesis area, power, and delay. These values are then compared against the ASAP7-based post-synthesis values to compute the corresponding improvements.…”
Section: B. Methodology
confidence: 99%
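The comparison described in step 3 amounts to per-metric percentage improvements of the TNN7-based post-synthesis results over the ASAP7 baseline. The sketch below illustrates only that final bookkeeping step; the metric values are placeholders chosen for illustration, not figures reported in the cited work.

```python
# Minimal sketch of the baseline-vs.-TNN7 comparison described above.
# All numbers are placeholder values, NOT results from the cited paper.

def improvement(baseline: float, optimized: float) -> float:
    """Percentage reduction of `optimized` relative to `baseline`."""
    return 100.0 * (baseline - optimized) / baseline

# Hypothetical post-synthesis metrics for one functional module:
# area in um^2, power in uW, delay in ns.
asap7_baseline = {"area": 1200.0, "power": 45.0, "delay": 1.8}
tnn7_resynth   = {"area":  800.0, "power": 30.0, "delay": 1.5}

for metric in ("area", "power", "delay"):
    gain = improvement(asap7_baseline[metric], tnn7_resynth[metric])
    print(f"{metric:>5}: {gain:5.1f}% improvement over the ASAP7 baseline")
```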
“…These features make TNNs truly neuromorphic and therefore suitable for building extremely energy-efficient edge-native sensory processors for applications such as time-series clustering [1]. A microarchitecture framework for efficient CMOS implementation of TNNs has recently been proposed in [6]. The proposed implementation methodology utilizes two notions of temporal resolution, and thereby two hardware clocks: 1) a unit clock serving as the finest temporal resolution to calibrate the spike timings within a single instance of input, and 2) a coarser-resolution gamma clock to separate different input instances.…”
Section: Introduction
confidence: 99%
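To make the two clock domains concrete, the sketch below separates successive input instances with a coarse gamma clock and encodes spike timing within each instance in unit-clock ticks. The gamma period and the example spike times are assumptions chosen for illustration, not parameters taken from [6].

```python
# Minimal sketch (not the paper's implementation) of the two-clock scheme:
# a coarse "gamma clock" separates input instances, while a fine "unit clock"
# encodes spike times within a single instance.

GAMMA_PERIOD = 8  # unit-clock ticks per gamma cycle (assumed value)

def to_global_ticks(instances):
    """Flatten per-instance spike times (in unit ticks) into global unit-clock
    ticks, offsetting each successive input instance by one gamma period."""
    events = []
    for gamma_cycle, spike_times in enumerate(instances):
        for t in spike_times:
            if not 0 <= t < GAMMA_PERIOD:
                raise ValueError("spike time must fit inside one gamma cycle")
            events.append((gamma_cycle, gamma_cycle * GAMMA_PERIOD + t))
    return events

# Two example input instances, each a list of spike times in unit-clock ticks.
instances = [[0, 3, 5], [1, 2, 7]]
for gamma_cycle, tick in to_global_ticks(instances):
    print(f"gamma cycle {gamma_cycle}: spike at global unit tick {tick}")
```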