2010
DOI: 10.1145/1839480.1839486
VFloat

Abstract: Optimal reconfigurable hardware implementations may require the use of arbitrary floating-point formats that do not necessarily conform to IEEE specified sizes. We present a variable precision floating-point library (VFloat) that supports general floating-point formats including IEEE standard formats. Most previously published floating-point formats for use with reconfigurable hardware are subsets of our format. Custom datapaths with optimal bitwidths for each operation can be built using the variable precisio…
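
The library itself targets reconfigurable hardware and its modules are not reproduced here. As a rough software analogue of the idea in the abstract, below is a minimal sketch of a floating-point format parameterized by exponent and mantissa widths; the type name vfloat_fmt and its fields are illustrative assumptions, not VFloat's actual interface.

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical sketch (not VFloat's actual interface): a floating-point
// format parameterized by exponent and mantissa bit widths, so a custom
// datapath can pick an optimal width per operation.
template <unsigned EXP, unsigned MANT>
struct vfloat_fmt {
    static_assert(1 + EXP + MANT <= 64, "packed into a uint64_t here");
    static constexpr unsigned width = 1 + EXP + MANT;        // sign|exp|mant
    static constexpr int64_t  bias  = (1 << (EXP - 1)) - 1;  // IEEE-style bias

    uint64_t bits;  // packed representation: [sign | exponent | mantissa]

    unsigned sign()     const { return (bits >> (EXP + MANT)) & 1; }
    uint64_t exponent() const { return (bits >> MANT) & ((1ull << EXP) - 1); }
    uint64_t mantissa() const { return bits & ((1ull << MANT) - 1); }
};

int main() {
    using ieee32 = vfloat_fmt<8, 23>;   // IEEE single precision is one instance
    // Custom format from the citation below: 10-bit exponent, 29-bit mantissa
    // (sign + 10 + 29 = 40 bits; the cited 41-bit total presumably counts one
    // extra bit not modeled in this sketch).
    using fp41   = vfloat_fmt<10, 29>;
    printf("ieee32: %u bits, bias %lld\n", ieee32::width, (long long)ieee32::bias);
    printf("custom: %u bits, bias %lld\n", fp41::width,   (long long)fp41::bias);
}
```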

Cited by 42 publications (4 citation statements). References 40 publications.

“…al. [33], based on Hung's [16] approach, reported a custom-precision floating-point division on a Virtex-II Pro FPGA for a 41-bit (10-bit exponent, 29-bit mantissa) floating-point format. The area complexity is quite large, requiring 62 BRAMs at 125 MHz, and the design is further reported to suffer precision loss.…”
Section: Comparison With Series Expansion (SE) Methods
Confidence: 99%
“…This method requires large amounts of logic (area) in terms of memory and multipliers, but is better in terms of latency and performance vis-à-vis the digit-recurrence method. The approximation method comes into play when the desired level of accuracy is low, and generally falls into two categories: direct approximation (using look-up tables) and linear/polynomial approximation (using small look-up tables and/or partial product arrays) [16,19,33]. All these methods vary primarily in area, speed, latency and/or accuracy, and mainly target normalized implementations.…”
Section: Introduction
Confidence: 99%
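
As a rough illustration of the linear/polynomial approximation category described in that snippet, the sketch below seeds a reciprocal estimate with a linear fit and refines it with Newton-Raphson iterations; the seed constants and the function name approx_divide are illustrative assumptions, not taken from the cited designs (a hardware version would typically seed from a small look-up table and use fixed-point multipliers).

```cpp
#include <cstdio>

// Illustrative approximation-based division (not the cited hardware):
// seed x0 ~ 1/b with a linear fit, then refine with Newton-Raphson,
// x <- x * (2 - b*x). Each iteration roughly doubles the number of
// correct bits, trading multiplier area for lower latency compared
// with digit-recurrence division.
double approx_divide(double a, double b) {
    // Assume b has been normalized to [1, 2), as a hardware divider
    // would do after separating out the exponent.
    double x = 24.0 / 17.0 - (8.0 / 17.0) * b;  // seed, max rel. error 1/17
    for (int i = 0; i < 4; ++i)                 // 4 iterations exceed 53 bits
        x = x * (2.0 - b * x);                  // Newton-Raphson step
    return a * x;                               // a/b = a * (1/b)
}

int main() {
    printf("%.17g\n", approx_divide(3.0, 1.7)); // expect ~1.7647058823529411
}
```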
“…• Rounding is required in order to trim the 106-bit mantissa multiplication result back to 53 bits only. This can be done as per the IEEE standard [16]–[17].…”
Section: Floating Point Multiplication
Confidence: 99%
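
A minimal software sketch of that trimming step, assuming the IEEE 754 default rounding mode (round to nearest, ties to even) and a GCC/Clang-style unsigned __int128 to hold the full 106-bit product of two 53-bit significands; the helper name round_product_rne is hypothetical.

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical helper: trim the 106-bit product of two 53-bit significands
// down to 53 bits with round-to-nearest-even. 'exp_adjust' reports how much
// the result exponent must grow (normalization and/or rounding carry-out).
uint64_t round_product_rne(unsigned __int128 prod, int &exp_adjust) {
    // The product of two significands in [1, 2) lies in [1, 4), so the
    // leading 1 sits at bit 104 or bit 105 of the 106-bit result.
    int shift  = (prod >> 105) ? 53 : 52;   // low bits to discard
    exp_adjust = shift - 52;                // 1 if the product was >= 2.0

    uint64_t kept = (uint64_t)(prod >> shift);                    // top 53 bits
    unsigned __int128 rest = prod & (((unsigned __int128)1 << shift) - 1);
    unsigned __int128 half = (unsigned __int128)1 << (shift - 1);

    // Round to nearest; on an exact tie, round to the even significand.
    if (rest > half || (rest == half && (kept & 1)))
        kept += 1;

    // Rounding can carry out of 53 bits (all-ones significand): renormalize.
    if (kept >> 53) { kept >>= 1; ++exp_adjust; }
    return kept;   // 53-bit significand, implicit leading 1 at bit 52
}

int main() {
    unsigned __int128 m = (unsigned __int128)3 << 51;  // significand of 1.5
    int adj = 0;
    uint64_t r = round_product_rne(m * m, adj);
    // 1.5 * 1.5 = 2.25: significand 1.125 (0x12000000000000), exponent += 1
    printf("significand 0x%llx, exponent += %d\n", (unsigned long long)r, adj);
}
```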
“…Most embedded systems, Systems-on-Chip (SoC) and transmission systems are implemented using fixed-point, floating-point or hybrid number systems, wherein fixed-point [1][2] and floating-point numbers [3][4] can be used together on the same chip [5]–[7]. The IEEE 754-1985 standard was released for binary floating-point arithmetic with new features such as better precision, range and accuracy [8].…”
Section: Introduction
Confidence: 99%
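
To make the fixed/floating contrast in that snippet concrete, here is a small sketch holding the same value in a Q4.12 fixed-point format (an illustrative format choice, not from the cited papers) and in IEEE 754 binary32.

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    const double value = 3.14159;

    // Fixed point: an integer scaled by 2^-12 (Q4.12). Cheap adders and
    // multipliers, but range and precision are frozen at design time.
    int16_t fx = (int16_t)(value * (1 << 12));
    printf("fixed  Q4.12   : raw=%d -> %f\n", fx, fx / 4096.0);

    // Floating point: sign/exponent/mantissa fields trade extra area for
    // dynamic range (IEEE 754-1985 binary32 here).
    float fp = (float)value;
    printf("float  binary32: %f\n", fp);
}
```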