Proceedings of the 39th International Conference on Computer-Aided Design 2020
DOI: 10.1145/3400302.3415773

Opportunities for RTL and gate level simulation using GPUs

Cited by 11 publications (2 citation statements)
References 11 publications
“…Zhang [43] called for a renewal in GPU-accelerated RTL simulation research by leveraging recent advances in GPU-compute APIs designed for machine learning.…”
Section: Parallel RTL Simulation
Confidence: 99%
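
To make the idea in this statement concrete, here is a minimal, hypothetical sketch (not from the cited paper) of how an ML-oriented GPU API such as JAX can express gate-level simulation as batched, bit-parallel tensor operations. The netlist, gate structure, and stimulus counts below are invented purely for illustration.

import jax
import jax.numpy as jnp

def eval_netlist(inputs):
    # inputs: two int32 words; each bit lane carries one test vector,
    # so a single call simulates 32 stimuli (two-state, bit-parallel).
    a, b = inputs[0], inputs[1]
    g1 = a & b        # AND gate
    g2 = a ^ b        # XOR gate
    g3 = ~g1          # inverter
    return g2 | g3    # OR gate driving the primary output

# vmap batches independent stimulus words; jit fuses the gate
# evaluations into GPU kernels, much like an ML inference workload.
batched_eval = jax.jit(jax.vmap(eval_netlist))

stimuli = jax.random.randint(
    jax.random.PRNGKey(0), (1024, 2), 0, 2**31 - 1, dtype=jnp.int32)
outputs = batched_eval(stimuli)  # 1024 words x 32 lanes = 32768 vectors
print(outputs.shape)             # (1024,)

The appeal of this formulation is that the ML toolchain, rather than a hand-written simulator kernel, handles compilation, batching, and device placement.
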
“…EDA tools typically involve solving large-scale optimization problems with heavy numerical computation, especially at the physical design stage, and extensive work is devoted to accelerating these solvers with modern parallel computing hardware such as multicore CPUs or GPUs [26,29,98]. Many recent studies have explored GPU opportunities in EDA problems [59,91,167,168]. Still, developing good GPU implementations of EDA algorithms is challenging.…”
Section: Acceleration with Deep Learning Engine
Confidence: 99%
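
As a toy illustration of the numerical EDA optimization this statement describes, the sketch below minimizes a quadratic-placement-style wirelength objective with gradient descent in JAX. It is an assumption-laden example, not any cited tool's method: the 4-cell edge list, step size, and iteration count are invented, and real placers add density and spreading terms to keep cells from collapsing onto one point.

import jax
import jax.numpy as jnp

# Hypothetical 4-cell netlist given as an edge list (cell index pairs).
edges = jnp.array([[0, 1], [1, 2], [2, 3], [3, 0], [0, 2]])

def wirelength(pos):
    # pos: (num_cells, 2) placement coordinates; the objective is the
    # sum of squared edge lengths, the classic quadratic-placement form.
    delta = pos[edges[:, 0]] - pos[edges[:, 1]]
    return jnp.sum(delta ** 2)

grad_fn = jax.jit(jax.grad(wirelength))  # analytic gradient, GPU-compiled

pos = jax.random.uniform(jax.random.PRNGKey(0), (4, 2))
for _ in range(100):               # plain gradient descent
    pos = pos - 0.1 * grad_fn(pos)
print(float(wirelength(pos)))      # approaches 0 as cells converge

Because the objective and its gradient are expressed as dense tensor operations, the same code runs unchanged on a CPU or a GPU, which is exactly why these solvers map well onto modern parallel hardware.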