2009
DOI: 10.1007/s10617-009-9044-4

Float-to-fixed and fixed-to-float hardware converters for rapid hardware/software partitioning of floating point software applications to static and dynamic fixed point coprocessors

Abstract: While hardware/software partitioning has been shown to provide significant performance gains, most hardware/software partitioning approaches are limited to partitioning computational kernels utilizing integers or fixed point implementations. Software developers often initially develop an application using the floating point representations built into most programming languages and later convert the application to a fixed point representation, a potentially time consuming process. In this paper, we present the Ariz…

Cited by 7 publications (4 citation statements)
References 25 publications
“…In all three architectures, we conservatively use 64-bit fixed-point numbers, with 32 bits for the fractional part. Since the data available from ECMWF models is in floating-point, we instantiate a floating-point to fixed-point converter in the FPGA for dynamic conversions in all three cases [8]. The slice usage and calculation latency estimates are shown for Virtex-5 XC5VLX devices.…”
Section: Design Considerations
confidence: 99%
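
As a rough software model of the conversion this statement relies on, the sketch below maps a double onto the signed 64-bit Q32.32 format mentioned above (32 integer bits, 32 fractional bits). The round-to-nearest and saturation policy is an assumption made for illustration; the converter cited as [8] performs the equivalent operation in FPGA hardware, and its exact architecture is not reproduced here.

/* Illustrative software model of a float-to-fixed conversion into a
 * 64-bit Q32.32 word (32 integer bits, 32 fractional bits).
 * Rounding and saturation behavior are assumptions, not details of [8]. */
#include <stdint.h>
#include <stdio.h>
#include <math.h>

#define FRAC_BITS 32

static int64_t double_to_q32_32(double x)
{
    double scaled = x * ldexp(1.0, FRAC_BITS);          /* x * 2^32 */
    if (scaled >= (double)INT64_MAX) return INT64_MAX;  /* saturate high */
    if (scaled <= (double)INT64_MIN) return INT64_MIN;  /* saturate low  */
    return (int64_t)llround(scaled);                    /* round to nearest */
}

static double q32_32_to_double(int64_t q)
{
    return ldexp((double)q, -FRAC_BITS);                /* q * 2^-32 */
}

int main(void)
{
    double samples[] = { 273.15, -0.0625, 1013.25 };
    for (int i = 0; i < 3; i++) {
        int64_t q = double_to_q32_32(samples[i]);
        printf("%12.6f -> 0x%016llx -> %12.6f\n",
               samples[i], (unsigned long long)q, q32_32_to_double(q));
    }
    return 0;
}

The multiply-by-2^32 followed by rounding corresponds to the shift a hardware converter would derive from the floating-point exponent, but this is only a behavioral reference, not the circuit of [8].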
“…Most embedded systems, System-on-Chip (SoC) and transmission systems are implemented using either fixed point, floating point or hybrid number systems, wherein fixed point [1] [2] and floating point numbers [3] [4] can be used together on the same chip [5]-[7]. The IEEE 754-1985 standard for binary floating point arithmetic was released with features such as improved precision, range and accuracy [8].…”
Section: Introduction
confidence: 99%
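
To make the IEEE 754-1985 single-precision fields concrete, here is a small, illustrative decomposition of a float into its sign, biased exponent, and fraction fields; the 1/8/23 field widths and the exponent bias of 127 come from the standard itself, while the program is only a sketch of how a converter would inspect these fields.

/* Unpack an IEEE 754-1985 single-precision value into its fields.
 * Field widths (1/8/23 bits) and the bias of 127 are from the standard. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    float f = -6.25f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);            /* reinterpret, no conversion */

    uint32_t sign     = bits >> 31;            /* 1 bit  */
    uint32_t exponent = (bits >> 23) & 0xFFu;  /* 8 bits, biased by 127 */
    uint32_t fraction = bits & 0x7FFFFFu;      /* 23 bits */

    printf("value=%g sign=%u exponent=%u (unbiased %d) fraction=0x%06X\n",
           f, sign, exponent, (int)exponent - 127, fraction);
    return 0;
}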
“…The numbers that are used in many DSP and communication systems are scaled between [−1, 1). The [−1, 1) scaled version of equations (3) and (4) can be written as Equations (5) and (6)…”
Section: Introduction
confidence: 99%
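
Equations (3)-(6) of the citing paper are not reproduced on this page, but the [−1, 1) scaling it refers to corresponds to a fractional fixed-point format such as Q1.31; the sketch below assumes that format and simple clipping at the range limits, both of which are illustrative choices rather than details taken from the cited work.

/* Fractional (Q1.31) representation of values scaled to [-1, 1):
 * a signed 32-bit word whose value is word / 2^31, so the representable
 * range is [-1, 1 - 2^-31]. Clipping at the limits is an assumed policy. */
#include <stdint.h>
#include <stdio.h>
#include <math.h>

static int32_t to_q1_31(double x)                       /* expects x in [-1, 1) */
{
    double scaled = x * ldexp(1.0, 31);                 /* x * 2^31 */
    if (scaled >= (double)INT32_MAX) return INT32_MAX;  /* clip at 1 - 2^-31 */
    if (scaled < (double)INT32_MIN)  return INT32_MIN;  /* clip at -1        */
    return (int32_t)lround(scaled);
}

int main(void)
{
    double xs[] = { -1.0, -0.5, 0.0, 0.999999 };
    for (int i = 0; i < 4; i++) {
        int32_t q = to_q1_31(xs[i]);
        printf("%+9.6f -> 0x%08X -> %+9.6f\n",
               xs[i], (uint32_t)q, ldexp((double)q, -31));
    }
    return 0;
}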
“…Biasing is pipelined and performed in 4 cycles. The inverse process (unbiasing) requires an 8-bit addition to all decompressed values and takes one cycle. Float to fixed & fixed to float conversions: converting from float to fixed point numbers and vice versa is implemented as described in [35], requiring a single cycle.…”
confidence: 99%
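
As a software reference for the fixed-to-float direction mentioned here, the sketch below performs the usual steps (sign extraction, leading-one detection, normalization, field assembly) sequentially; [35] describes a single-cycle hardware implementation, and the Q16.16 input format and truncation of dropped bits below are assumptions made purely for illustration.

/* Illustrative fixed-to-float conversion from a signed Q16.16 word to an
 * IEEE 754 single-precision value. Truncation (round toward zero) is used
 * when fraction bits are dropped; inputs are assumed small enough that no
 * subnormal results occur. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static float q16_16_to_float(int32_t q)
{
    if (q == 0) return 0.0f;

    uint32_t sign = (q < 0) ? 1u : 0u;
    uint32_t mag  = (q < 0) ? (uint32_t)(-(int64_t)q) : (uint32_t)q;

    int msb = 31;
    while (!(mag & (1u << msb))) msb--;          /* locate the leading one */

    /* Value is mag * 2^-16, so the unbiased exponent is msb - 16. */
    uint32_t exponent = (uint32_t)(msb - 16 + 127);

    /* Align the leading one to bit 23, then drop it (implicit bit). */
    uint32_t frac = (msb > 23) ? (mag >> (msb - 23)) : (mag << (23 - msb));
    frac &= 0x7FFFFFu;

    uint32_t bits = (sign << 31) | (exponent << 23) | frac;
    float out;
    memcpy(&out, &bits, sizeof out);
    return out;
}

int main(void)
{
    int32_t samples[] = { 1 << 16, -(3 << 14), 12345 };  /* 1.0, -0.75, ~0.188 */
    for (int i = 0; i < 3; i++)
        printf("0x%08X -> %g\n", (uint32_t)samples[i], q16_16_to_float(samples[i]));
    return 0;
}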