2019 IEEE 9th International Conference on Consumer Electronics (ICCE-Berlin) 2019
DOI: 10.1109/icce-berlin47944.2019.8966187
Fast and Light-weight Binarized Neural Network Implemented in an FPGA using LUT-based Signal Processing and its Time-domain Extension for Multi-bit Processing

Cited by 12 publications (8 citation statements)
References 3 publications
“…A number of FPGA implementations of AI devices have been proposed so far 2–6; there are mainly two approaches. In the first approach, the input to the neural network (NN) and the summation of the products with the weights are realized using a multiplier.…”
Section: Preliminary
confidence: 99%
“…A number of FPGA implementations of AI devices have been proposed so far [2][3][4][5][6]; there are mainly two approaches.…”
Section: AI Devices
confidence: 99%
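The alternative to the multiplier-based approach mentioned above is the binarized one the paper's title refers to: with weights and activations constrained to {-1, +1}, each product degenerates to an XNOR and the accumulation to a popcount, which maps naturally onto FPGA LUTs. As a minimal illustration (not code from the paper; the bit encoding 1 → +1, 0 → −1 is an assumption for this sketch):

```python
def binarized_dot(x_bits: int, w_bits: int, n: int) -> int:
    """Dot product of n values in {-1, +1}, packed as bits (1 -> +1, 0 -> -1)."""
    xnor = ~(x_bits ^ w_bits) & ((1 << n) - 1)  # 1 wherever the signs agree
    ones = bin(xnor).count("1")                 # number of +1 products
    return 2 * ones - n                         # remaining products are -1

# x = [+1, -1, +1], w = [+1, +1, -1] -> products [+1, -1, -1] -> sum = -1
print(binarized_dot(0b101, 0b110, 3))  # -1
```

No multiplier is needed at any point, which is what makes this style of network fast and light-weight on LUT-based fabric.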
“…Therefore, one can assume that the proposed method can be applied universally as long as the two redundant NNs are realized via combinatorial circuits. There are a number of methods for the combinatorial-circuit implementation of NNs; in particular, one can consider methods based on the NN implementation in this study or the LUT-network reported in the literature 20.…”
Section: Evaluation Experiments
confidence: 99%
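The combinatorial-circuit (LUT-network) implementation mentioned in the quote above rests on the observation that a small binarized neuron is just a Boolean function of its input bits, so it can be exhaustively enumerated into a truth table and stored directly in an FPGA LUT. A hedged sketch of that enumeration (the neuron model, weights, and threshold here are illustrative assumptions, not the paper's):

```python
def neuron(bits, weights, threshold):
    """Binarized neuron: bits[i] in {0,1} encodes {-1,+1}; weights in {-1,+1}."""
    s = sum(w * (2 * b - 1) for b, w in zip(bits, weights))
    return 1 if s >= threshold else 0

def build_lut(weights, threshold):
    """Enumerate all 2^n input patterns into a truth table (one LUT per neuron)."""
    n = len(weights)
    return [neuron([(i >> k) & 1 for k in range(n)], weights, threshold)
            for i in range(1 << n)]

lut = build_lut([+1, -1, +1], threshold=1)
# At inference time each neuron costs a single table lookup:
print(lut[0b101])  # 1
```

Once every neuron is reduced to such a table, the whole network is a feed-forward combinatorial circuit with no arithmetic units at all, which is the premise behind applying it to redundant-NN schemes.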