As the memory footprint of emerging applications keeps growing, address translation has become a critical performance bottleneck due to frequent TLB misses. Moreover, the TLB miss penalty is increasingly severe in modern systems because the hierarchical page table (a.k.a. radix page table) gains levels as the address space is extended. To reduce TLB misses, modern high-performance processors employ a multi-level TLB structure with a large last-level TLB. A large last-level TLB can reduce TLB misses, but its capacity is still limited and it incurs chip area overhead. In this paper, we propose PSE Pinning, a mechanism that provides a large store for page structure entries (PSEs) by dedicating part of the last-level cache exclusively to holding them. PSE Pinning is based on three key observations. First, memory-intensive applications suffer frequent last-level cache misses, so much of the last-level cache capacity is poorly utilized. Second, most PSEs are fetched from main memory during the page table walk, meaning the cache lines holding PSEs are frequently evicted from the on-chip caches. Third, a small number of PSEs are accessed frequently while the rest are not. Exploiting these observations, PSE Pinning pins the frequently accessed page structure entries in the last-level cache so that they remain resident. Experimental results show that PSE Pinning improves the performance of memory-intensive workloads suffering from frequent L2 TLB misses by 7.8% on average.
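
As a rough illustration of the pinning idea described above, the C sketch below models how a frequency-tracking table and a pin decision might look in a simulator. The table size, counter width, threshold, and the names pse_should_pin and pse_index are illustrative assumptions, not the actual hardware design proposed in the paper.

```c
#include <stdint.h>
#include <stdbool.h>

#define PIN_THRESHOLD   8        /* accesses before a PSE line is treated as hot (assumption) */
#define TABLE_ENTRIES   1024     /* size of the tracking table (assumption) */

/* Per-line access counter, indexed by a hash of the PSE cache-line address. */
static uint8_t pse_access_count[TABLE_ENTRIES];

static inline unsigned pse_index(uint64_t line_addr) {
    return (unsigned)(line_addr % TABLE_ENTRIES);
}

/* Called on every page-table-walk access to a PSE cache line.
 * Returns true if the line should be installed in the LLC ways
 * reserved for pinned page structure entries. */
bool pse_should_pin(uint64_t line_addr) {
    unsigned idx = pse_index(line_addr);
    if (pse_access_count[idx] < UINT8_MAX)   /* saturating counter */
        pse_access_count[idx]++;
    return pse_access_count[idx] >= PIN_THRESHOLD;
}
```

In this sketch, only PSE lines whose saturating counter crosses the threshold are pinned, which matches the observation that a small number of PSEs account for most page-table-walk accesses; how the reserved cache space is partitioned and replaced is left to the mechanism itself.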