Circuit simulators create a virtual environment for testing circuit designs, saving time and hardware cost. However, as the number of components in a design grows, most simulators take increasingly long to test it, in many cases days or even weeks. To handle large designs accurately and efficiently, simulators therefore need to be improved. In this paper, we propose a machine-learning-based parallel implementation of a circuit analyser on the graphics card using the Compute Unified Device Architecture (CUDA). After parsing the netlist file, the first approach identifies compute-intensive mathematical functions and converts them into parallel executable versions. We further propose design-level parallelism, a hybrid parallel implementation of both components and processing methods. Dynamic decision-making selects which functions and parameters to map onto the Graphics Processing Unit (GPU). To reduce load overhead, a machine-learning clustering approach is adopted. Although the combination of procedure clustering and mapping consumes a few extra cycles, the overall performance is markedly more efficient than serial processing.
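
To illustrate the component-level parallelism described above, the following minimal CUDA sketch evaluates one circuit component per GPU thread. It is a hypothetical example, not the paper's implementation: the Component struct, the evalConductance kernel, and the launch configuration are all illustrative assumptions, and for brevity the sketch computes only resistor conductances rather than stamping a full modified-nodal-analysis matrix.

#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical flattened component record (resistors only, for brevity).
struct Component {
    int   node_a;       // first terminal node index
    int   node_b;       // second terminal node index
    float resistance;   // ohms
};

// Each thread converts one component into its conductance value.
// A real analyser would also stamp these values into the system matrix.
__global__ void evalConductance(const Component* comps, float* g, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        g[i] = 1.0f / comps[i].resistance;
}

int main()
{
    const int n = 4;
    Component h_comps[n] = {
        {0, 1, 100.0f}, {1, 2, 220.0f}, {2, 0, 330.0f}, {1, 0, 470.0f}
    };

    Component* d_comps;
    float*     d_g;
    cudaMalloc(&d_comps, n * sizeof(Component));
    cudaMalloc(&d_g,     n * sizeof(float));
    cudaMemcpy(d_comps, h_comps, n * sizeof(Component), cudaMemcpyHostToDevice);

    // One thread per component; the grid is sized to cover all n components.
    evalConductance<<<(n + 255) / 256, 256>>>(d_comps, d_g, n);

    float h_g[n];
    cudaMemcpy(h_g, d_g, n * sizeof(float), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i)
        printf("component %d: G = %f S\n", i, h_g[i]);

    cudaFree(d_comps);
    cudaFree(d_g);
    return 0;
}

Because every component is evaluated independently, this step scales with the number of GPU threads rather than with circuit size, which is the intuition behind mapping compute-intensive functions onto the GPU.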