“…The independent nature of the time-integration step for each individual grid vertex makes the method ideal for the massively parallel computing model [21]. In [22], [23], [24], and [25], general-purpose GPU computing (GPGPU) [26] has been utilized for solving the non-linear Schrödinger equation with promising results. However, this has only been done using Cartesian spatial discretization, and in many cases by relying on a ready-made linear algebra library.…”
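To illustrate the per-vertex independence that makes such schemes attractive for GPUs, the following is a minimal, hypothetical CUDA sketch of one explicit finite-difference time step of the non-linear Schrödinger equation i ψ_t = -½ ψ_xx + |ψ|² ψ on a 1-D Cartesian grid. Each thread advances a single grid vertex using only neighbouring values from the previous time level, so all updates within a step can proceed in parallel. The kernel name, array names, and the forward-Euler update are illustrative assumptions, not the scheme of any of the cited works.

```cuda
#include <cuComplex.h>

// Hypothetical sketch: advance ψ by one explicit step; one thread per grid vertex.
__global__ void nls_step(const cuDoubleComplex* psi_old,
                         cuDoubleComplex* psi_new,
                         int n, double dx, double dt)
{
    int j = blockIdx.x * blockDim.x + threadIdx.x;
    if (j <= 0 || j >= n - 1) return;            // boundary values kept fixed

    cuDoubleComplex c = psi_old[j];
    cuDoubleComplex l = psi_old[j - 1];
    cuDoubleComplex r = psi_old[j + 1];

    // Second-order central difference for ψ_xx.
    double lap_re = (cuCreal(l) - 2.0 * cuCreal(c) + cuCreal(r)) / (dx * dx);
    double lap_im = (cuCimag(l) - 2.0 * cuCimag(c) + cuCimag(r)) / (dx * dx);

    // |ψ|² at this vertex.
    double dens = cuCreal(c) * cuCreal(c) + cuCimag(c) * cuCimag(c);

    // ψ_t = i (½ ψ_xx − |ψ|² ψ); multiplying by i swaps components with a sign.
    double rhs_re = 0.5 * lap_re - dens * cuCreal(c);
    double rhs_im = 0.5 * lap_im - dens * cuCimag(c);
    double dpsi_re = -rhs_im;
    double dpsi_im =  rhs_re;

    // Forward-Euler update (illustrative only; practical solvers use stabler schemes).
    psi_new[j] = make_cuDoubleComplex(cuCreal(c) + dt * dpsi_re,
                                      cuCimag(c) + dt * dpsi_im);
}
```

Because each thread reads only the previous time level and writes a distinct output vertex, the kernel has no intra-step data dependencies, which is the property the passage credits for the method's suitability to massively parallel hardware.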