We present a technique based on layering thread-local, sequential maps over variants of skip graphs in order to increase NUMA locality in concurrent data structures. The thread-local maps are used to "jump" into an underlying, concurrent skip graph close to where insertions, removals, and searches take place or complete. The skip graph is constrained in height and employs a data-partitioning scheme that increases NUMA locality and reduces synchronization overhead. Our numbers indicate a 70% reduction in the number of remote CAS operations, and a 41.4% increase in CAS success rate with 92 threads, compared to skip lists under high contention. Moreover, qualitatively speaking, our locality improvements are such that the larger the distance between two NUMA nodes, the bigger the reduction in remote accesses between threads pinned to those nodes. We implemented lazy and non-lazy variants of our technique; with 96 threads, the lazy version operates at least 80% faster under high-contention settings (32% of operations being successful updates on a 2^10-sized structure), and at least 32% faster under low-contention/low-update settings (4% of operations being successful updates on a 2^17-sized structure). We also developed an additional skip graph variant, which we call a sparse skip graph, that makes both our thread-local maps and our shared structure sparser, performing more than 2.5 times faster than competing NUMA-aware approaches in low-contention/low-update settings.
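The thread-local "jump" idea described above can be sketched as follows. This is a simplified, hypothetical illustration only: a single sorted linked list stands in for the bottom level of the skip graph, and a single cached position per thread stands in for a full thread-local map. Searches start from the cached position when it does not overshoot the target, instead of always starting from the head, and inserts link new nodes with a CAS on the predecessor's next pointer.

```java
import java.util.concurrent.atomic.AtomicReference;

// Simplified sketch (not the paper's implementation): a shared sorted list
// with per-thread hints that let each thread resume searching near its
// previous access, reducing traversal length and remote accesses.
class JumpList {
    static final class Node {
        final int key;
        final AtomicReference<Node> next = new AtomicReference<>();
        Node(int key) { this.key = key; }
    }

    private final Node head = new Node(Integer.MIN_VALUE); // sentinel

    // Per-thread hint into the shared structure (the "thread-local map"
    // of the technique, reduced to one cached position for illustration).
    private final ThreadLocal<Node> hint = ThreadLocal.withInitial(() -> head);

    // Return the last node with key <= target, starting from the hint when
    // the hint does not overshoot the target; otherwise fall back to head.
    private Node findFrom(int target) {
        Node start = hint.get();
        Node cur = (start.key <= target) ? start : head;
        Node next = cur.next.get();
        while (next != null && next.key <= target) {
            cur = next;
            next = cur.next.get();
        }
        hint.set(cur); // remember position for the next nearby operation
        return cur;
    }

    boolean contains(int key) {
        return findFrom(key).key == key;
    }

    // Lock-free insert (no removal in this sketch, so no node marking needed).
    boolean insert(int key) {
        while (true) {
            Node pred = findFrom(key);
            if (pred.key == key) return false;           // already present
            Node succ = pred.next.get();
            if (succ != null && succ.key <= key) continue; // raced; retry
            Node node = new Node(key);
            node.next.set(succ);
            if (pred.next.compareAndSet(succ, node)) return true;
        }
    }
}
```

Because the hint is thread-local, threads operating on nearby key ranges (as encouraged by the partitioning scheme) mostly traverse nodes they recently touched, which is the source of the locality gains summarized above.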