To extract performance from supercomputers, programmers in the High Performance Computing (HPC) community often have to combine several frameworks to exploit the multiple levels of parallelism. Over the years, efforts have been made to simplify this situation by creating frameworks that can handle several levels at once, which usually means the programmer has to learn a new library. Other frameworks, in contrast, were created by extending the capabilities of established paradigms. In this paper, we explore one of these libraries, OpenMP Cluster. As its name implies, it extends the OpenMP API, allowing seasoned programmers to leverage their experience and use a single API for both shared-memory and distributed-memory parallelism. We took an existing plasma physics code written with MPI+OpenMP and ported it to OpenMP Cluster, and we show that, under certain conditions, the performance of OpenMP Cluster is similar to that of the MPI+OpenMP code.