2019 2nd Workshop on Energy Efficient Machine Learning and Cognitive Computing for Embedded Applications (EMC2)
DOI: 10.1109/emc249363.2019.00012

Integrating NVIDIA Deep Learning Accelerator (NVDLA) with RISC-V SoC on FireSim

Abstract: NVDLA is an open-source deep neural network (DNN) accelerator which has received a lot of attention from the community since its introduction by Nvidia. It is a full-featured hardware IP and can serve as a good reference for conducting research and development of SoCs with integrated accelerators. However, an expensive FPGA board is required to do experiments with this IP in a real SoC. Moreover, since NVDLA is clocked at a lower frequency on an FPGA, it would be hard to do accurate performance analysis with suc…

Cited by 45 publications (14 citation statements)
References 10 publications
“…Our framework does not target an FPGA-accelerated environment such as FireSim [21]. FireSim has also been integrated with the NVDLA accelerator [14], which requires editing the RTL code of the SoC. gem5+rtl is less accurate at the entire-SoC level, but in exchange it offers more flexibility with a generic full-system simulator that is easier to modify and work with.…”
Section: FPGA-Accelerated Solutions
mentioning confidence: 99%
“…Work in this area can be partitioned into two subareas, roughly along the dimension of generality. The first, and more general, involves the design and testing of domain-specific accelerators (e.g., GEMM accelerators such as Gemmini [43] and NVDLA [47]) as custom instruction set architectures (ISAs). These ISAs are conceptually not so different from general-purpose compute architectures (in that they are programmable), except insofar as they prioritize a limited number of operations, particularly those pertinent to AI workloads (such as matrix multiplication).…”
Section: Lightweight AI Capability for Future Detector Systems
mentioning confidence: 99%
“…In [3] the authors presented LACore, a novel programmable accelerator architecture for general-purpose linear algebra applications, suitable for the new generation of RISC-V processor cores. In [4], the authors integrate the Nvidia Deep Learning Accelerator (NVDLA) into a RISC-V SoC. NVDLA is an open-source deep neural network (DNN) accelerator which has received a lot of attention from the research community since its introduction by Nvidia.…”
Section: Related Work
mentioning confidence: 99%