HPC offers tremendous potential for processing the large volumes of data often termed big data. Distributing data efficiently and leveraging specialised hardware (e.g., accelerators) are critical to best utilising HPC platforms consisting of heterogeneous and distributed systems. In this paper, we develop a portable, high-level paradigm for running big data applications on such systems, specifically the graph analytics applications popular in the big data and machine learning communities. Using our paradigm, we accelerate three real-world, compute- and data-intensive graph analytics applications: a function call graph similarity application, a triangle enumeration subroutine, and a graph assaying application. Our paradigm combines the MapReduce framework Apache Spark with CUDA, simultaneously taking advantage of automatic data distribution and the accelerator on each node of the system. We demonstrate scalability and parameter-space exploration, and offer a portable solution for leveraging almost any legacy, current, or next-generation HPC or cloud-based system.
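To make the triangle enumeration subroutine mentioned above concrete, the following is a minimal single-node sketch in plain Python. It is illustrative only and not the authors' Spark/CUDA implementation; in the paper's paradigm, the edge list would be distributed as a Spark RDD and the per-partition common-neighbour intersections offloaded to a CUDA kernel on each node.

```python
def enumerate_triangles(edges):
    """Enumerate unique triangles (u, v, w) with u < v < w in an undirected graph.

    Illustrative serial version: in a Spark+CUDA setting, the edge list would be
    partitioned across nodes and each partition's neighbour-set intersections
    computed on that node's accelerator.
    """
    # Build an adjacency map from the undirected edge list.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    triangles = []
    for u, v in edges:
        if u > v:
            u, v = v, u
        # Each common neighbour w > v closes a unique triangle (u, v, w).
        for w in adj[u] & adj[v]:
            if w > v:
                triangles.append((u, v, w))
    return triangles


if __name__ == "__main__":
    # Small example graph: two triangles sharing the edge (1, 2).
    edges = [(0, 1), (1, 2), (0, 2), (2, 3), (1, 3)]
    print(sorted(enumerate_triangles(edges)))
```

The edge-parallel loop above is what makes the subroutine amenable to MapReduce-style distribution: each edge's intersection is independent, so partitions of the edge list can be processed concurrently.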