2020 IEEE International Solid-State Circuits Conference (ISSCC)
DOI: 10.1109/isscc19947.2020.9063000
14.1 A 510nW 0.41V Low-Memory Low-Computation Keyword-Spotting Chip Using Serial FFT-Based MFCC and Binarized Depthwise Separable Convolutional Neural Network in 28nm CMOS

Cited by 55 publications (29 citation statements). References 1 publication.
“…Table 1 summarizes the comparison of our work with other recent KWS accelerators. Our design achieves 2.3–6.8X power savings compared to Shan et al (2020) among KWS accelerators. If we scale the area of our design to 28 nm it would be 0.37 mm², which is still slightly higher than Shan et al (2020).…”
Section: Discussion
confidence: 97%
“…Our design achieves 2.3–6.8X power savings compared to Shan et al (2020) among KWS accelerators. If we scale the area of our design to 28 nm it would be 0.37 mm², which is still slightly higher than Shan et al (2020). The higher area usage of our work is possibly because it does not adopt time-sharing in neuron hardware.…”
Section: Discussion
confidence: 97%
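The area projection quoted above rests on first-order technology scaling, where layout area shrinks roughly with the square of the feature size. A minimal sketch of that arithmetic (the starting node and area below are hypothetical illustrations, not figures taken from the cited works):

```python
def scale_area(area_mm2: float, node_from_nm: float, node_to_nm: float) -> float:
    """First-order node scaling: area is proportional to (feature size)^2."""
    return area_mm2 * (node_to_nm / node_from_nm) ** 2

# Hypothetical example: a 1.5 mm^2 design at 65 nm projected to 28 nm
print(round(scale_area(1.5, 65, 28), 2))
```

This is only a rough bound: SRAM, analog blocks, and I/O pads do not shrink at the ideal rate, so real ported designs usually land above the first-order estimate.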
“…However, most of the existing KWS and SV wake-up systems, limited by the high power consumption of feature extraction and recognition, can only run in the cloud [1]-[4]. MFCCs are often used in voice signal processing tasks [5]-[7]; although they capture sufficient information, they require a fast Fourier transform (FFT), a discrete cosine transform (DCT), and other operations that consume considerable power. In state-of-the-art ASICs for KWS and SV, MFCC feature extraction accounts for almost 40% (~8 μW) of the total power consumption [6].…”
Section: Introduction
confidence: 99%
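The FFT-then-DCT pipeline the quote refers to can be sketched end to end for a single frame. This is a generic textbook MFCC, not the paper's serial FFT implementation; the frame length, filterbank size, and coefficient count are illustrative assumptions:

```python
import numpy as np

def mfcc_frame(frame, sr=16000, n_mels=20, n_mfcc=10):
    """Minimal MFCC sketch for one frame: FFT -> mel filterbank -> log -> DCT-II.
    Parameters are hypothetical; real front-ends add pre-emphasis, liftering, etc."""
    n_fft = len(frame)
    # Power spectrum of the windowed frame (the FFT stage)
    spec = np.abs(np.fft.rfft(frame * np.hanning(n_fft))) ** 2

    # Triangular mel filterbank spanning 0 .. sr/2
    mel_max = 2595 * np.log10(1 + (sr / 2) / 700)
    hz_pts = 700 * (10 ** (np.linspace(0, mel_max, n_mels + 2) / 2595) - 1)
    bins = np.floor((n_fft + 1) * hz_pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)  # rising edge
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)  # falling edge

    log_mel = np.log(fbank @ spec + 1e-10)

    # DCT-II decorrelates log-mel energies into cepstral coefficients (the DCT stage)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_mfcc), 2 * n + 1) / (2 * n_mels))
    return dct @ log_mel

coeffs = mfcc_frame(np.random.randn(512))
print(coeffs.shape)  # (10,)
```

The FFT and DCT stages dominate the multiply count, which is why hardware MFCC front-ends (serialized FFTs, approximate filterbanks) target exactly these blocks for power reduction.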
“…However, massive computation and parameter counts are the bottlenecks of its low-power implementation. Although the power consumption of [7], [8] is ultra-low, they perform only two-class VAD and KWS.…”
Section: Introduction
confidence: 99%
“…In particular, MobileNet [14], [15] can greatly reduce the number of parameters and the computational burden by means of a separable convolution, which divides a three-dimensional (height, width, channel) filter into two simple filters, i.e., a one-dimensional (1, 1, channel) filter and a two-dimensional (height, width, 1) filter. Both MobileNet and ResNet are commonly utilized in recent analog neuron hardware [12], [16]. The classification accuracy of a recent version of MobileNet [15] (MobileNetV2) has been improved by adding a linear projection layer and through inter-channel expansion.…”
Section: Introduction
confidence: 99%
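The parameter saving from the depthwise/pointwise split described above is easy to make concrete. A small sketch with illustrative layer sizes (the 3×3×64→64 shape is a hypothetical example, not a layer from the cited designs):

```python
def conv_params(h, w, c_in, c_out):
    # Standard convolution: one full (h, w, c_in) filter per output channel
    return h * w * c_in * c_out

def dw_separable_params(h, w, c_in, c_out):
    depthwise = h * w * c_in          # one (h, w, 1) filter per input channel
    pointwise = 1 * 1 * c_in * c_out  # (1, 1, c_in) filters mix the channels
    return depthwise + pointwise

std = conv_params(3, 3, 64, 64)          # 36864 weights
sep = dw_separable_params(3, 3, 64, 64)  # 576 + 4096 = 4672 weights
print(std, sep, round(std / sep, 1))     # ~7.9x fewer parameters
```

In general the reduction factor is 1/c_out + 1/(h·w), so for a 3×3 kernel it approaches 9x as the channel count grows; binarizing the remaining weights, as in the title paper, shrinks storage by a further 32x versus FP32.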