The CMS experiment requires vast computational capacity to generate, process, and analyze the data from proton-proton collisions at the Large Hadron Collider, along with the corresponding Monte Carlo simulations. CMS computing needs have so far been satisfied mostly by the Worldwide LHC Computing Grid (WLCG), a collaboration of more than a hundred computing centers distributed around the world. However, as CMS faces the challenges of Run 3 and the High-Luminosity LHC (HL-LHC), with increasing luminosity and event complexity, CPU demands are projected to grow substantially. In these future scenarios, additional contributions from more diverse types of resources, such as Cloud and High-Performance Computing (HPC) clusters, will be required to complement the limited growth in the capacity of WLCG resources. This paper describes a number of strategies being evaluated for accessing and using WLCG and non-WLCG processing capacity as part of a combined infrastructure, successfully exploiting an increasingly heterogeneous pool of resources, efficiently scheduling computing workloads according to their requirements and priorities, and delivering analysis results to the collaboration in a timely manner.