Proceedings. IEEE Symposium on FPGAs for Custom Computing Machines (Cat. No.98TB100251)
DOI: 10.1109/fpga.1998.707941
A scaleable FIR filter using 32-bit floating-point complex arithmetic on a configurable computing machine

Cited by 5 publications (3 citation statements)
References 3 publications
“…For a given required accuracy, this result can be used to predict the expected size of a set of multiplication units. For example, a typical FIR filter [22] requires many multiplication units; the average size of a multiplication unit serves as a good basis for the prediction of the size of such a filter.…”
Section: Theorem (mentioning)
Confidence: 99%
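To illustrate the size-prediction idea in the statement above, the sketch below estimates the area of an N-tap FIR filter from the average size of one multiplication unit. All figures (tap count, per-unit LUT counts) are hypothetical placeholders for illustration only, not values taken from the cited papers.

    # Hypothetical area estimate for an N-tap FIR filter built from
    # floating-point multiplication and addition units (assumed figures).
    num_taps = 16          # assumed tap count
    avg_mult_luts = 600    # assumed average size of one multiplier, in LUTs
    avg_add_luts = 400     # assumed average size of one adder, in LUTs

    # A direct-form filter needs one multiplier per tap and roughly one
    # adder per tap (N-1 adders in the accumulation chain), so the average
    # multiplier size serves as the basis of the overall estimate.
    estimated_luts = num_taps * avg_mult_luts + (num_taps - 1) * avg_add_luts
    print(f"Estimated filter size: {estimated_luts} LUTs")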
“…To avoid that compromise and to achieve less number of look-up tables, it is better to choose the proposed architecture which can directly perform arithmetic operations on the complex numbers that are represented using 16-bit subset of IEEE floating point format. Some works also tried to perform complex arithmetic using resource sharing and pipelining concepts, by using a single floating point adder and floating point multiplier for processing of both real and imaginary parts to implement a typical DSP benchmark like scalable FIR filter [4].…”
Section: Introduction (mentioning)
Confidence: 99%
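To make the resource-sharing idea concrete, the following minimal Python sketch computes one output sample of a complex FIR filter using only scalar floating-point multiplies and adds, i.e. the operations that a single shared floating-point multiplier and adder would carry out sequentially in hardware for the real and imaginary parts. The function and variable names are illustrative and not taken from the cited designs.

    def complex_fir_output(x, h):
        """One output sample y = sum_k h[k] * x[k], with x and h given as
        (re, im) pairs. Each complex multiply expands into 4 real multiplies
        and 2 real adds, which a single shared floating-point multiplier and
        adder would process one after another."""
        acc_re, acc_im = 0.0, 0.0
        for (xr, xi), (hr, hi) in zip(x, h):
            acc_re += hr * xr - hi * xi   # real part of h*x
            acc_im += hr * xi + hi * xr   # imaginary part of h*x
        return acc_re, acc_im

    # Example: a 3-tap filter applied to the three most recent input samples.
    samples = [(1.0, 0.5), (0.25, -0.75), (-1.5, 2.0)]
    coeffs  = [(0.5, 0.0), (0.25, 0.25), (0.125, -0.125)]
    print(complex_fir_output(samples, coeffs))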
“…Floating point FIR filters have been analyzed in detail [29], the Fast-Fourier-Transform has received particular attention [8,19], and Lienhart et al perform an N-body simulation [22] with custom floating point numbers. In our context vector and matrix operations are of particular interest.…”
Section: Floating Point Numbers on FPGAs (mentioning)
Confidence: 99%