The next frontier of high-performance computing is exascale, which will certainly stand as a noteworthy milestone in the quest for processing power. In practice, applications sustain only a fraction of the technically available computing power (the so-called theoretical peak), and the gap is likely to widen with the hardware complexity of the target system. Key aspects of this complexity include: the heterogeneity of the computing units, the memory hierarchy and partitioning including the non-uniform memory access (NUMA) configuration, and the interconnect for data exchanges among the computing nodes. Scientific investigations and cutting-edge technical activities should ideally scale up with respect to sustained performance. The case of quantitative approaches for solving (large-scale) problems deserves particular attention. Indeed, most common real-life problems, even when considering the artificial intelligence paradigm, rely on optimization techniques for the main kernels of their algorithmic solutions. Mathematical programming and pure combinatorial methods are not easy to implement efficiently on large-scale supercomputers because of irregular control flow, complex memory access patterns, heterogeneous kernels, and numerical issues, to name a few. We describe and examine our thoughts from the standpoint of large-scale supercomputers.
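To make the gap between peak and sustained performance concrete, consider the following minimal sketch (the symbols R_peak, R_sustained, p, f, and v are introduced here for illustration only):

    R_peak = p × f × v,        E = R_sustained / R_peak,

where p is the number of cores, f the clock frequency, and v the number of floating-point operations completed per cycle per core. The efficiency E is the fraction of the theoretical peak that an application actually delivers, and on complex heterogeneous systems it typically remains well below 1.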