Processing in memory (PIM) is a promising approach for achieving ultra-low latency in data-intensive application processing systems. Although most studies of PIM-based computing systems have employed cache memory, few have adequately explored reasonable cache management policies for PIM. This paper therefore studies cache management policies for PIM-based computing systems and classifies existing policies according to where caches are located and how they are managed. To evaluate the policies, we model three types of PIM-based computing systems within an in-memory system architecture: one with a single internal cache, one with an external cache hierarchy, and one with multiple internal caches. We also simulate the performance and power consumption of the three models under workloads with diverse characteristics. The experimental results show how cache policies influence the performance and power consumption of PIM-based in-memory computing systems.