This paper describes an efficient numerical solution for speeding up transient simulations of analog circuits on a many-core computer. The technique is based on an explicit integration method parallelised on a multiprocessor architecture. Although the integration step is smaller than that required by traditional simulation methods based on Newton-Raphson iterations, explicit methods avoid expensive computations such as matrix factorizations, which lead to long CPU simulation times. The proposed technique has been implemented on an NVIDIA GPU and demonstrated by simulating Gaussian filtering operations performed by a CMOS vision chip. Devices of this type, which are used to perform computation at the edge, include built-in image processing functions, making them very complex and time-consuming circuits to simulate during design. The proposed method is faster than Ngspice for a range of image sizes, and for a 128 × 128 pixel image it achieves a speedup of two orders of magnitude.
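To illustrate why explicit integration maps well onto a GPU, the following is a minimal sketch (not the authors' implementation) of one forward-Euler step over a resistive-grid network of the kind used for Gaussian filtering in vision chips. Each thread updates a single node voltage from its neighbours' values, so no linear system is assembled or factorized; the kernel name and the parameters `step` (time step) and `g_over_c` (conductance over node capacitance) are illustrative assumptions.

```cuda
// Hypothetical sketch: one explicit (forward-Euler) integration step of an
// N x N resistive-grid network. One thread per circuit node; no matrix
// factorization is required, only neighbour reads and a local update.
__global__ void euler_step(const float *v, float *v_next,
                           int N, float step, float g_over_c)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= N || y >= N) return;

    int i  = y * N + x;
    float vc = v[i];

    // Sum of currents from the four resistive neighbours
    // (zero-flux boundary at the array edges).
    float sum = 0.0f;
    if (x > 0)     sum += v[i - 1] - vc;
    if (x < N - 1) sum += v[i + 1] - vc;
    if (y > 0)     sum += v[i - N] - vc;
    if (y < N - 1) sum += v[i + N] - vc;

    // Explicit update: v_next = v + h * (G/C) * (neighbour currents).
    v_next[i] = vc + step * g_over_c * sum;
}
```

Because every node update is independent within a step, the kernel scales with the number of pixels, which is the property the proposed method exploits; the smaller step size demanded by explicit stability is traded against this per-step parallelism.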