2021
DOI: 10.1109/access.2021.3107370
Low Error Efficient Approximate Adders for FPGAs

Abstract: In this paper, we propose a methodology for designing low error efficient approximate adders for FPGAs. The proposed methodology utilizes FPGA resources efficiently to reduce the error of approximate adders. We propose two approximate adders for FPGAs using our methodology: low error and area efficient approximate adder (LEADx), and area and power efficient approximate adder (APEx). Both approximate adders are composed of an accurate and an approximate part. The approximate parts of these adders are designed i…
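The abstract describes adders split into an accurate upper part and an approximate lower part. The paper's LEADx and APEx designs are not reproduced here; as a generic illustration of the split-adder idea, the sketch below implements the well-known lower-part OR adder (LOA) baseline in its simplest variant (no carry into the accurate part), which follows the same accurate/approximate partitioning:

```python
def loa_approx_add(a, b, width=16, approx_bits=4):
    """Lower-part OR adder (LOA), simplest variant: a classic baseline that,
    like the adders described in the abstract, splits the datapath into an
    accurate upper part and an approximate lower part.

    The lower `approx_bits` bits are approximated with a bitwise OR (no carry
    chain); the upper bits are added exactly, with no carry in from below.
    """
    mask = (1 << approx_bits) - 1
    # Approximate lower sum: bitwise OR instead of addition (never overestimates
    # a carry, so the result is always <= the exact sum for these bits).
    low = (a & mask) | (b & mask)
    # Accurate upper sum, shifted back into position.
    high = ((a >> approx_bits) + (b >> approx_bits)) << approx_bits
    # Clamp to width+1 bits (sum of two width-bit operands).
    return (high | low) & ((1 << (width + 1)) - 1)
```

For operands whose lower bits never both carry (e.g. 3 + 4), the result is exact; the error appears only when the discarded lower-part carries matter, which is the trade-off such adders exploit for area and power savings.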

Cited by 23 publications (5 citation statements) · References 29 publications
“…Chowdari et al. [30] presented a systolic design for a high-throughput adaptive block FIR filter using distributed arithmetic: a block least-mean-squares FIR filter built on a systolic array with parallel data-processing units (ECS Journal of Solid State Science and Technology, 2023, 12, 097002).…”
Section: Literature Survey
confidence: 99%
“…This makes the training process computationally intensive and requires efficient strategies, such as "factorized sampling," to sample a subset of permutations during each training iteration. Another difficulty in training XLNet is the need for large-scale computing resources [85], [268], [269]. The vast number of possible permutations and the large model size increase the memory and computation requirements.…”
Section: B. Training of LLMs
confidence: 99%
“…Balasubramanian et al. [17] proposed a reduced-error approximate adder using improved hardware. Ahmad et al. [18] designed two approximate adders with reduced error by exploiting FPGA resources. Although approximate computing can be applied at every level of a computing system, from software down to circuits, this research focuses primarily on error-tolerant (ET) adders [19][20][21][22][23][24][25].…”
Section: Literature Review
confidence: 99%