Proceedings of the 2005 IEEE International Conference on Field-Programmable Technology, 2005.
DOI: 10.1109/fpt.2005.1568520

A parameterized floating-point exponential function for FPGAs

Abstract: This article presents a generator of floating-point exponential operators targeting recent FPGAs with embedded memories and DSP blocks. A single-precision operator consumes just one DSP block, 18 Kbits of dual-port memory, and 392 slices on Virtex-4. For larger precisions, a generic approach based on polynomial approximation is used and proves more resource-efficient than the literature. For instance, a double-precision operator consumes 5 BlockRAMs and 12 DSP48 blocks on Virtex-5, or 10 M9Ks and 22 18x18 …
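
The abstract only names the overall scheme (range reduction followed by polynomial approximation of the reduced argument). The sketch below is a minimal software model of that standard scheme, not the paper's fixed-point hardware architecture: the function name exp_poly_model, the degree-4 Taylor polynomial, and the use of double-precision arithmetic are assumptions made here for illustration; a real operator would use tables and minimax polynomials sized to the target precision.

```c
#include <math.h>
#include <stdio.h>

/* Illustrative software model of the classic scheme:
 *   e^x = 2^E * e^y,  with E = round(x / ln 2) and y = x - E*ln 2,
 * so that |y| <= (ln 2)/2 and e^y can be approximated by a short
 * polynomial. Hardware operators replace the polynomial step with
 * fixed-point tables and DSP multipliers. */
static double exp_poly_model(double x)
{
    const double LOG2E = 1.4426950408889634;   /* 1 / ln(2) */
    const double LN2   = 0.6931471805599453;

    /* Range reduction: pick E so the remainder y is small. */
    int    E = (int)nearbyint(x * LOG2E);
    double y = x - (double)E * LN2;            /* |y| <= 0.347 */

    /* Degree-4 Taylor polynomial for e^y, evaluated in Horner form. */
    double p = 1.0 + y * (1.0 + y * (0.5 + y * (1.0 / 6.0 + y * (1.0 / 24.0))));

    /* Reconstruction: multiply by 2^E (a simple exponent update in hardware). */
    return ldexp(p, E);
}

int main(void)
{
    printf("%.6f vs %.6f\n", exp_poly_model(1.0), exp(1.0));
    return 0;
}
```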

Cited by 23 publications (15 citation statements)
References 25 publications
“…Since the NFG directly realizes the function table of a floating-point function using a piecewise-split EVMDD, it is more accurate than existing NFGs using polynomial approximation [2,3,6,20,24].…”
Section: NFGs Based on Piecewise-Split EVMDDs
confidence: 99%
“…However, floating-point representation tends to produce complex and slow NFGs. Thus, the design of floating-point NFGs is especially hard, and only design methods for some numeric functions are known [2,3,6,20,24]. Since these design methods are intended only for specific functions, different functions need different design methods and architectures.…”
Section: Introduction
confidence: 99%
“…3. Since the proposed NFG directly realizes the function table of a floating-point function using an EVMDD, it is more accurate than existing NFGs using polynomial approximation [4,5,8,22,26]. 4.…”
Section: Example 4, By Realizing the EVMDD
confidence: 99%
“…However, floating-point representation tends to produce complex and slow NFGs. Thus, the design of floating-point NFGs is especially hard, and only design methods for some elementary functions are known [4,5,8,22,26]. Since these design methods are intended only for specific functions, different functions need different design methods and architectures.…”
Section: Introduction
confidence: 99%
“…Moreover, it will be possible to tailor it exactly to the application. The present article, which builds on and extends previous publications (Detrey et al., 2005c; Detrey et al., 2005b; Detrey et al., 2006), shows that this flexibility of the FPGA then allows it to outperform the processor implementation in throughput. For single precision, a throughput ten times higher can thus be obtained, where the basic operators had a throughput ten times lower.…”
Section: Computing Elementary Functions in Floating Point