Multi-level caches have recently become more popular because they perform better than single-level caches. They are especially useful in distributed and parallel systems where many applications run concurrently, since the larger aggregate cache size makes the data required by an application program more likely to be available in the cache. Many multi-level cache management policies, such as LRU-K [15], PROMOTE [1], and DEMOTE [5], have been developed, but performance issues remain, and the main difficulty in these policies is selecting a victim. In this paper, a new policy is proposed that uses compressed caching and selects a victim based on three factors: first, how many times the cache block has been promoted or demoted; second, the size of the cache block to be replaced; and third, the recency of the block in the cache memory [1]. This policy is expected to achieve a better hit ratio than existing multi-level cache management policies.
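To make the three-factor victim selection concrete, the following is a minimal sketch, not the paper's actual algorithm: it assumes each factor contributes linearly to an eviction score and that larger, frequently moved, least-recently-used blocks are preferred victims. The names (CacheBlock, select_victim) and the weights are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class CacheBlock:
    key: str
    size: int          # size of the (compressed) cache block
    move_count: int    # times the block has been promoted or demoted
    last_access: int   # logical timestamp of the most recent access

def select_victim(blocks, now, w_moves=1.0, w_size=1.0, w_recency=1.0):
    """Return the block with the highest eviction score.

    Assumption for illustration: a block is a better victim when it has been
    promoted/demoted many times, is large, and has not been accessed recently.
    """
    def score(b):
        age = now - b.last_access
        return w_moves * b.move_count + w_size * b.size + w_recency * age
    return max(blocks, key=score)

# Example usage with two candidate blocks at logical time 100.
blocks = [
    CacheBlock("a", size=4096, move_count=3, last_access=90),
    CacheBlock("b", size=1024, move_count=1, last_access=99),
]
victim = select_victim(blocks, now=100)  # block "a" under these weights
```

The relative weighting of the three factors is left open here; the paper's policy may combine them differently.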