The interaction between cores and memory blocks in multiprocessor chips and smart systems has always been a concern, as it affects network latency, memory capacity, and power consumption. A 2.5-dimensional architecture has recently been introduced in which communication between the processing elements and the memory blocks takes place through a layer called the interposer. When a core wants to communicate with another core, it uses the top layer; when it wants to access the memory blocks, it uses the interposer layer. If coherence traffic on the processing layer increases to the point of congestion, part of that traffic may be diverted to the interposer network under a mechanism called load balancing. However, coherence traffic moved to the interposer layer may interfere with memory traffic. This paper introduces a mechanism that avoids this interference by defining two separate virtual channels and using multiple links, where the destination address identifies which memory block is being accessed. Based on the destination address, our method selects the virtual channel and link to use on the interposer layer. Simulation results show that the proposed mechanism reduces latency by 32% and 14% compared to the traditional load-balancing and unbalanced mechanisms, respectively.
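The channel- and link-selection idea described above can be sketched as follows. This is a minimal illustration, not the paper's exact scheme: the names (`NUM_LINKS`, `select_route`), the two virtual-channel labels, and the modulo mapping from destination address to link are all assumptions introduced for clarity.

```python
# Hypothetical sketch: route selection for a packet entering the interposer layer.
# Coherence traffic diverted by load balancing gets its own virtual channel so it
# cannot interfere with memory traffic; the link is derived from the destination
# address, which identifies the target memory block.

NUM_LINKS = 4  # assumed number of interposer links toward the memory blocks

def select_route(dest_addr: int, is_coherence: bool) -> tuple[str, int]:
    """Return (virtual_channel, link) for a packet on the interposer layer."""
    vc = "VC_COHERENCE" if is_coherence else "VC_MEMORY"
    link = dest_addr % NUM_LINKS  # assumed interleaving of memory blocks over links
    return vc, link
```

Separating the two traffic classes onto distinct virtual channels means a burst of diverted coherence packets can share a physical link with memory packets without blocking them at the buffer level.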