2015 49th Asilomar Conference on Signals, Systems and Computers
DOI: 10.1109/acssc.2015.7421167
A low power radix-2 FFT accelerator for FPGA

Cited by 14 publications (10 citation statements)
References 9 publications
“…The fact that this device uses a NAS to decompose the audio into frequency bands instead of a Fourier Transform leads to lower power consumption. As presented in [44], a low-power radix-2 FFT accelerator for FPGA has a power consumption of 125 mW, whereas the NAS consumes only 29.7 mW [22], less than 24% of the FFT's power consumption. Additionally, the NAS could interface directly with Spiking Convolutional Neural Networks (SCNN) without requiring segmentation of the information or sonogram generation, processing the auditory information in a continuous way.…”
Section: Discussion (confidence: 99%)
“…Hundreds of research implementations [22, 56] and commercial implementations [1, 3–6, 22] of FFT accelerators exist, intended both as stand-alone accelerators and for integration into larger accelerators [125]. Work on supporting FFT acceleration exists for FPGAs [94], GPUs [84], and specialized architectures ranging from linear algebra cores [104] to CGRAs [67, 85], machine-learning accelerators [50, 80, 117], optical computers [78], and sonic computers [103].…”
Section: FFT Accelerators (confidence: 99%)
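The radix-2 decimation-in-time algorithm behind such accelerators can be sketched in a few lines of software. This is an illustrative model only (the function name `fft_radix2` is mine, not from any cited work); hardware implementations use an iterative butterfly pipeline rather than recursion:

```python
import cmath

def fft_radix2(x):
    """Recursive radix-2 decimation-in-time FFT.

    Input length must be a power of two. Each stage splits the signal
    into even- and odd-indexed halves, transforms them, and recombines
    with a butterfly: out[k] = E[k] + W*O[k], out[k+n/2] = E[k] - W*O[k].
    """
    n = len(x)
    if n == 1:
        return list(x)
    even = fft_radix2(x[0::2])
    odd = fft_radix2(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        # Twiddle factor W_n^k = e^{-2*pi*i*k/n} applied to the odd half.
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle           # butterfly: top output
        out[k + n // 2] = even[k] - twiddle  # butterfly: bottom output
    return out
```

For example, `fft_radix2([1, 1, 1, 1])` concentrates all energy in the DC bin, giving `[4, 0, 0, 0]` up to rounding.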
“…The multiply-and-accumulate (MAC) unit implements the equation Y = A × B + C and plays a central role in several applications, including image and audio processing [1], [2], convolutional neural networks, and adaptive filtering [3]–[5]. This calls for optimized MAC implementations with reduced power and area.…”
Section: Introduction (confidence: 99%)
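As a minimal sketch of the operation this excerpt describes (function names are illustrative, not from the cited paper), one MAC step and its repeated use in a dot product, the pattern at the heart of filtering and convolution:

```python
def mac(a, b, acc):
    """One multiply-and-accumulate step: Y = A * B + C."""
    return a * b + acc

def dot(xs, ws):
    """Dot product built from repeated MAC steps, as in an FIR filter tap loop."""
    acc = 0
    for x, w in zip(xs, ws):
        acc = mac(x, w, acc)
    return acc
```

In hardware the same chain is realized as a single pipelined multiplier feeding an accumulator register, which is why MAC power and area dominate these workloads.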