With the advent of complex modern architectures, the low-level paradigms long considered sufficient to build High Performance Computing (HPC) numerical codes have met their limits. Achieving efficiency and ensuring portability while preserving programming tractability on such hardware prompted the HPC community to design new, higher-level paradigms, relying on runtime systems to maintain performance. However, the common weakness of these projects is that they deeply tie applications to specific, expert-only runtime system APIs. The OpenMP specification, which aims at providing common parallel programming means for shared-memory platforms, appears as a good candidate to address this issue thanks to the task-based constructs introduced in its revision 4.0. The goal of this paper is to assess the effectiveness and limits of this support for designing a high-performance numerical library, ScalFMM, implementing the fast multipole method (FMM), which we have deeply re-designed around the most advanced features provided by OpenMP 4. We show that OpenMP 4 allows for significant performance improvements over previous OpenMP revisions on recent multicore processors, and that extensions to the 4.0 standard further improve performance, bridging the gap with the very high performance that was so far reserved to expert-only runtime system APIs.

Index Terms—high performance computing, fast multipole method, runtime system, OpenMP, compiler, parallel programming model, priority, commutativity, multicore architecture

Algorithm 5: tb-omp4#task#dep scheme with OpenMP 4.0 directives
 1  function FMM(tree, kernel)
 2      #pragma omp parallel
 3      #pragma omp single
 4      // Near-field
 5      P2P_taskdep(tree, kernel);
    ...
18      foreach cell cl in tree.cells[level] do
19          #pragma omp task depend(inout: cl.multipole) \
20              depend(in: tree.getChildren(cl.mindex, level).multipole)
21              kernel.M2M(cl.multipole, tree.getChildren(cl.mindex, level).multipole);

… STARPU directives
 1  function FMM(tree, kernel)
 2      // Near-field
 3      P2P_starpu(tree, kernel);
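To make the depend clauses of Algorithm 5 concrete, the following minimal, self-contained C sketch mimics the M2M pattern on plain scalars (the names child and parent are illustrative, not ScalFMM identifiers): two producer tasks each write one child value, and a consumer task reads both and updates the parent. The OpenMP runtime derives the execution order from the depend clauses, not from program order.

#include <stdio.h>

int main(void)
{
    double child[2] = { 0.0, 0.0 };
    double parent = 0.0;

    #pragma omp parallel
    #pragma omp single
    {
        // Producer tasks: each writes one child value (P2M-like step).
        #pragma omp task depend(out: child[0])
        child[0] = 1.0;

        #pragma omp task depend(out: child[1])
        child[1] = 2.0;

        // Consumer task: reads both children, updates the parent
        // (M2M-like step). The runtime releases it only once both
        // producer tasks have completed.
        #pragma omp task depend(in: child[0], child[1]) depend(inout: parent)
        parent += child[0] + child[1];
    } // implicit barrier: all tasks have completed past this point

    printf("parent = %.1f\n", parent); // prints 3.0
    return 0;
}

Any compiler with OpenMP 4.0 support accepts this code (e.g., gcc -fopenmp); the depend(inout:) clause on parent is exactly the construct Algorithm 5 uses on cl.multipole.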
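For comparison with the STARPU variant, the sketch below shows how the same ordering is obtained with the STARPU API: access modes (STARPU_R, STARPU_RW) are declared per data handle at task insertion, and the runtime infers the dependencies under its default sequential-consistency policy. The codelet and variable names are illustrative, not ScalFMM's actual P2P_starpu or M2M codelets.

#include <stdint.h>
#include <starpu.h>

// Illustrative codelet body: doubles the registered value.
static void scale_cpu(void *buffers[], void *cl_arg)
{
    (void)cl_arg;
    double *v = (double *)STARPU_VARIABLE_GET_PTR(buffers[0]);
    *v *= 2.0;
}

static struct starpu_codelet scale_cl = {
    .cpu_funcs = { scale_cpu },
    .nbuffers  = 1,
    .modes     = { STARPU_RW },
};

int main(void)
{
    double x = 21.0;
    starpu_data_handle_t xh;

    if (starpu_init(NULL) != 0)
        return 1;
    starpu_variable_data_register(&xh, STARPU_MAIN_RAM,
                                  (uintptr_t)&x, sizeof(x));

    // Both tasks access xh in read-write mode: STARPU serializes them,
    // much as consecutive inout dependences would in OpenMP.
    starpu_task_insert(&scale_cl, STARPU_RW, xh, 0);
    starpu_task_insert(&scale_cl, STARPU_RW, xh, 0);

    starpu_task_wait_for_all();
    starpu_data_unregister(xh); // x now holds 84.0
    starpu_shutdown();
    return 0;
}

The key contrast with the OpenMP scheme is that dependencies attach to registered data handles rather than to memory addresses named in pragmas, which is what gives runtime systems such as STARPU their extra scheduling freedom (priorities, commutative accesses) discussed in this paper.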