2020
DOI: 10.1109/tvlsi.2020.3004602
PLAC: Piecewise Linear Approximation Computation for All Nonlinear Unary Functions

Cited by 49 publications (29 citation statements) · References 27 publications
“…However, the calculation of the exhaustion method is time consuming, resulting in the segmenter running on the software for hours or even days. Therefore, we optimize the calculation process of the segmenter based on the method proposed in [9].…”
Section: Minimization of the Segments Number with a Given Precision
Confidence: 99%
“…In order to solve the problem of the excessively high delay of the above indexing method, this paper proposes a new indexing method based on [8,9], as shown on the left side of Figure 7. For a function with n segments, the input is simultaneously compared with the starting points S₂, S₃, ….…”
Section: Test of Segmenter Performance
Confidence: 99%
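The indexing scheme quoted above — comparing the input against all segment starting points at once — can be sketched in software. This is a hedged illustration, not the cited hardware design: the segment boundaries below are made up, and the popcount-of-comparators trick is stated generically.

```python
# Sketch of parallel-comparator segment indexing: in hardware, each
# comparison (x >= S_i) is one comparator, and the number of comparators
# that fire gives the segment index directly. Boundaries are illustrative.

def segment_index(x, start_points):
    """Return the index of the segment containing x.

    start_points[i] is the start of segment i; the index equals the
    count of start points that x has passed, minus one.
    """
    # Sum of boolean comparisons == number of start points <= x.
    return sum(x >= s for s in start_points) - 1

# Example with 4 illustrative segments over [0, 1):
starts = [0.0, 0.25, 0.5, 0.75]
print(segment_index(0.6, starts))  # → 2
```

In hardware all comparisons evaluate in parallel, so the lookup latency is one comparator delay plus an adder tree, independent of the number of segments.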
“…In this case, the hardware implementation is based on several parallel comparators, a single multiply-add, and storage of the slope, intercept, and endpoints for each segment. In [38], with 15 segments, a mean approximation error of 1.2 × 10⁻⁴ is achieved to satisfy the 12-bit fractional fixed-point format. However, the available representation in the current design has an 8-bit fractional length, which corresponds to a 3.9 × 10⁻³ accuracy level.…”
Section: Logarithm Compression
Confidence: 99%
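The slope/intercept evaluation described above — one segment lookup followed by a single multiply-add — can be sketched as follows. This is an illustrative example, not the cited 15-segment design: it uses a uniform 8-segment chord approximation of log2 on [1, 2), so the segment count, boundaries, and resulting error are assumptions of this sketch.

```python
# Sketch of a slope/intercept piecewise-linear evaluator: per input, one
# index lookup and one multiply-add, mirroring the hardware scheme above.
# Uniform segmentation of log2 on [1, 2) is illustrative only.
import math

N_SEG = 8
starts = [1.0 + i / N_SEG for i in range(N_SEG)]
ends = [1.0 + (i + 1) / N_SEG for i in range(N_SEG)]
# Chord approximation: line through each segment's endpoints.
slopes = [(math.log2(e) - math.log2(s)) / (e - s)
          for s, e in zip(starts, ends)]
intercepts = [math.log2(s) - k * s for s, k in zip(starts, slopes)]

def pla_log2(x):
    """Approximate log2(x) on [1, 2) with one multiply-add."""
    # Uniform segments allow indexing by truncation instead of comparators.
    i = min(int((x - 1.0) * N_SEG), N_SEG - 1)
    return slopes[i] * x + intercepts[i]

err = max(abs(pla_log2(1.0 + t / 10000) - math.log2(1.0 + t / 10000))
          for t in range(10000))
print(f"max error with {N_SEG} segments: {err:.2e}")
```

With 8 uniform segments the chord error for log2 stays below roughly 3 × 10⁻³, which is the same order as the 8-bit fractional quantization step (2⁻⁸ ≈ 3.9 × 10⁻³) mentioned in the quote; tightening the approximation below the representable precision would buy nothing.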
“…However, recent applications of ANNs, e.g., IoT, medical systems, and telecommunication, require platforms with high throughput and the capacity to execute the algorithms in real time. An attractive solution is the development of hardware neural networks (HNNs) in Field-Programmable Gate Arrays (FPGAs) [15–21]. In this regard, the FPGA-based implementation of AFs in HNNs is one of the challenges for embedded system design according to recent studies, because AF implementations require low hardware resources and low power consumption [1,2,5,12,22–25].…”
Section: Introduction
Confidence: 99%