2022 IEEE 29th Symposium on Computer Arithmetic (ARITH)
DOI: 10.1109/arith54963.2022.00019

PERCIVAL: Open-Source Posit RISC-V Core With Quire Capability

Abstract: The posit representation for real numbers is an alternative to the ubiquitous IEEE 754 floating-point standard. In this work, we present PERCIVAL, an application-level posit RISC-V core based on CVA6 that can execute all posit instructions, including the quire fused operations. This solves the obstacle encountered by previous works, which only included partial posit support or which had to emulate posits in software. In addition, Xposit, a RISC-V extension for posit instructions, is incorporated into LLVM. Ther…
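The quire mentioned in the abstract is a wide fixed-point accumulator: fused operations such as dot products accumulate exactly, and rounding happens only once, when the quire is read back into a posit. A minimal sketch of that deferred-rounding idea using exact rationals (this mimics the numerical behaviour only, not the paper's hardware design; `fused_dot` is an illustrative name, not from PERCIVAL):

```python
from fractions import Fraction

def fused_dot(xs, ys):
    """Mimic a quire: accumulate a dot product exactly, round once at the end."""
    acc = Fraction(0)
    for x, y in zip(xs, ys):
        # Exact product and exact sum: no intermediate rounding occurs.
        acc += Fraction(x) * Fraction(y)
    # Single rounding step, analogous to reading the quire back into a posit/float.
    return float(acc)

# Naive float accumulation loses the small term to rounding:
#   ((1e16 * 1.0) + (1.0 * 1.0)) + (-1e16 * 1.0)  ->  0.0
# whereas the exact accumulator recovers it:
print(fused_dot([1e16, 1.0, -1e16], [1.0, 1.0, 1.0]))  # -> 1.0
```

The same cancellation-resistant behaviour is what the quire fused instructions provide in hardware, without the cost of arbitrary-precision arithmetic.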

Cited by 3 publications (2 citation statements)
References 0 publications
“…Additionally, the posit format does not include redundancy representations or overflow/underflow cases during operations [50]. Some studies [51][52][53][54] have confirmed that, for some applications, like convolutional neural networks or CNNs, the posit format outperforms the FP one in terms of accuracy. Posit achieves superior accuracy in the range near 1.0, where most computations occur, making it very attractive for deep learning applications.…”
Section: Posit Format (mentioning, confidence: 99%)
“…To address IEEE-754 standard limitations, new computer formats offer different trade-offs, such as Bfloat16 [2], [34], Tapered Floating-Point (TFP) [46], Posit [24], and FP8-E4M3 and FP8alt-E5M2 formats [45]. Studies compare these formats in terms of circuit area and numerical stability [3], [10], [11], [16], [31], [42], [44], [53], [56], [62].…”
Section: Introduction (mentioning, confidence: 99%)