Abstract-Optical interconnects, which support the transport of large bandwidths over warehouse-scale distances, can help to further scale data-movement capabilities in high-performance computing (HPC) platforms. However, due to the circuit-switching nature of optical systems and additional peculiarities, such as sensitivity to temperature and the need for wavelength channel locking, optical links generally exhibit long link initialization delays. These delays are a major obstacle to exploiting the high bandwidth of optics for application speedups, especially when low-latency remote direct memory access (RDMA) is required or small messages are used. These limitations can be overcome by maintaining a set of frequently used optical circuits based on the temporal locality of the application and by maximizing the number of reuses to amortize initialization overheads. However, since circuits cannot be simultaneously maintained between all source-destination pairs, the set of selected circuits must be carefully managed. This paper applies techniques inspired by cache optimizations to intelligently manage circuit resources with the goal of maximizing the successful circuit 'hit' rate. We propose the concept of "circuit reuse distance" and design circuit replacement policies based on this metric. We profile the reuse distance of a group of representative HPC applications with different communication patterns and show the potential to amortize circuit setup delay over multiple circuit requests. We then develop a reuse distance predictor based on a Markov transition matrix, along with two circuit replacement policies. The proposed predictor provides significantly higher accuracy than traditional maximum-likelihood prediction, and the two replacement policies are shown to effectively increase the hit rate compared to the Least Recently Used (LRU) policy. We further investigate the tradeoffs between the realized hit rate and energy consumption. Finally, the feasibility of the proposed concept is experimentally demonstrated using silicon photonic devices in an FPGA-controlled network testbed.
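To make the two key ideas in the abstract concrete, the sketch below is a minimal illustration (not the paper's implementation): it computes circuit reuse distances from a hypothetical trace of (src, dst) circuit requests and predicts the next reuse distance of a circuit from an empirically estimated Markov transition matrix over previously observed distances. The LRU-style distinct-circuit count and the most-probable-next-state prediction rule are illustrative assumptions.

```python
from collections import defaultdict

def reuse_distances(trace):
    """For each request, count the distinct circuits requested since the
    previous request to the same (src, dst) circuit; None on first use."""
    last_seen = {}            # circuit -> index of its previous request
    distances = []
    for i, circuit in enumerate(trace):
        if circuit in last_seen:
            window = trace[last_seen[circuit] + 1:i]
            distances.append(len(set(window)))  # stack/LRU-style distance
        else:
            distances.append(None)              # cold (first) request
        last_seen[circuit] = i
    return distances

class MarkovReuseDistancePredictor:
    """Predict the next reuse distance of a circuit from a first-order
    Markov transition matrix estimated from observed (prev -> next) pairs."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.last = {}        # circuit -> most recent reuse distance

    def observe(self, circuit, distance):
        prev = self.last.get(circuit)
        if prev is not None:
            self.counts[prev][distance] += 1
        self.last[circuit] = distance

    def predict(self, circuit):
        prev = self.last.get(circuit)
        row = self.counts.get(prev)
        if not row:
            return None                  # no transition history yet
        return max(row, key=row.get)     # most probable next distance

# Hypothetical trace of (src, dst) circuit requests.
trace = [(0, 1), (0, 2), (0, 1), (2, 3), (0, 1), (0, 2)]
dists = reuse_distances(trace)
predictor = MarkovReuseDistancePredictor()
for circuit, d in zip(trace, dists):
    if d is not None:
        predictor.observe(circuit, d)
print(dists)                      # [None, None, 1, None, 1, 2]
print(predictor.predict((0, 1)))  # predicted next reuse distance for (0, 1)
```

A replacement policy built on such a predictor would, on a miss, evict the established circuit whose predicted next reuse is farthest away, analogous to reuse-distance-aware cache replacement.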