Published: 2020
DOI: 10.1007/s42452-020-2667-6

GPU accelerated circuit analysis using machine learning-based parallel computing model

Abstract: Circuit simulators provide a virtual environment for testing circuit designs, saving both time and hardware cost. However, as the number of components in a design grows, most simulators need far longer to test large circuits, in many cases days or even weeks. Simulators therefore need to be improved to handle large datasets with accurate performance. In this paper, we propose machine learning-based parallel implementations of a circuit analyser on a graphics card with Compute Unified Device …
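To make the kind of data-parallel circuit evaluation the abstract alludes to concrete, the following is a minimal CUDA sketch in which one GPU thread computes the current of one resistive branch via Ohm's law. The kernel name, array layout and problem size are illustrative assumptions; the paper's actual analyser and its machine-learning model are not reproduced here.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Hypothetical kernel: one thread per circuit branch, computing the branch
// current I = V / R. This only illustrates the data-parallel style the
// abstract refers to, not the authors' actual analyser.
__global__ void branchCurrents(const float* v, const float* r, float* i, int n)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n) {
        i[idx] = v[idx] / r[idx];
    }
}

int main()
{
    const int n = 1 << 20;                 // one million branches (illustrative)
    const size_t bytes = n * sizeof(float);

    float *v, *r, *i;
    cudaMallocManaged(&v, bytes);
    cudaMallocManaged(&r, bytes);
    cudaMallocManaged(&i, bytes);

    for (int k = 0; k < n; ++k) { v[k] = 1.0f; r[k] = 100.0f; }

    const int threads = 256;
    const int blocks  = (n + threads - 1) / threads;
    branchCurrents<<<blocks, threads>>>(v, r, i, n);
    cudaDeviceSynchronize();

    printf("I[0] = %f A\n", i[0]);         // expected 0.01 A for 1 V across 100 ohm

    cudaFree(v); cudaFree(r); cudaFree(i);
    return 0;
}
```

Each branch is independent, so the launch scales to millions of elements with no inter-thread communication, which is the property that makes GPU acceleration attractive for large circuit datasets.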

Cited by 2 publications (2 citation statements)
References 18 publications
“…This is practically no longer the case today: new graphics-card hardware includes double-precision computation units and can readily compute in double precision. One only has to keep in mind that those operations are proportionally slower than their single-precision counterparts [15,28,29]. The graphics card used in the simulations in this article, an NVIDIA GeForce RTX 2080 Ti, has a manufacturer-estimated double-precision (64-bit) floating-point performance of 420.2 GFLOPS, whereas for half-precision (FP16, 16-bit) it reaches an impressive 26.90 TFLOPS [30].…”
Section: Parallelization (mentioning)
confidence: 99%
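The precision trade-off described in this statement can be illustrated with a small, hypothetical CUDA micro-benchmark that times the same arithmetic kernel instantiated for float and for double; on consumer cards such as the RTX 2080 Ti the double-precision run is expected to be markedly slower. The kernel, array size and iteration count below are assumptions made purely for illustration and are not taken from the cited works.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Hypothetical micro-benchmark kernel: repeated multiply-add on each element,
// instantiated once for float and once for double.
template <typename T>
__global__ void fmaLoop(T* data, int iters)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    T x = data[idx];
    for (int k = 0; k < iters; ++k) {
        x = x * static_cast<T>(1.000001) + static_cast<T>(0.000001);
    }
    data[idx] = x;
}

// Time one kernel launch for the given element type and return milliseconds.
template <typename T>
float timeKernel(int n, int iters)
{
    T* d = nullptr;
    cudaMalloc(reinterpret_cast<void**>(&d), n * sizeof(T));
    cudaMemset(d, 0, n * sizeof(T));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    fmaLoop<T><<<n / 256, 256>>>(d, iters);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d);
    return ms;
}

int main()
{
    const int n = 1 << 22;    // 4M elements, divisible by the block size of 256
    const int iters = 1000;   // illustrative workload per thread

    printf("FP32 kernel: %.2f ms\n", timeKernel<float>(n, iters));
    printf("FP64 kernel: %.2f ms\n", timeKernel<double>(n, iters));
    return 0;
}
```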
“…However, the biggest challenge is that maintaining this HPC technology is very costly (Netto et al, 2018), and it is often only large companies such as Google and Amazon that operate it. To handle this cost issue, deploying Graphics Processing Units (GPUs) is much more attractive, as this hardware offers a parallel architecture similar to HPC while being, as we are aware, very consumer-cost friendly (Hasan & Chakraborty, 2021; Jagtap & Rao, 2020) and easily accessible (Chen et al, 2014).…”
Section: Introduction (mentioning)
confidence: 99%