Multi-level caches have recently become more popular because they offer better performance than single-level caches. They are especially useful in distributed and parallel systems where many applications run concurrently, since the larger aggregate cache size makes the data required by application programs more readily available. Many multi-level cache management policies, such as LRU-K [15], PROMOTE [1], and DEMOTE [5], have been developed, but performance issues remain. The main difficulty in these policies is selecting a victim. In this paper, a new policy is proposed that uses compressed caching and selects a victim based on three factors: first, how many times the cache block has been promoted or demoted; second, the size of the cache block to be replaced; and third, the recency of the block in the cache memory [1]. This policy is expected to achieve a better hit ratio than the previously existing multi-level cache management policies.
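As a rough illustration of how the three victim-selection factors might be combined, the sketch below scores each cached block by its promotion/demotion count, its size, and its recency, and evicts the highest-scoring block. The scoring function, its weights, and all names (CacheBlock, victim_score, select_victim) are illustrative assumptions, not the policy as specified in the paper.

```python
import time
from dataclasses import dataclass, field


@dataclass
class CacheBlock:
    key: str
    size: int                     # factor 2: block size
    moves: int = 0                # factor 1: promotions + demotions so far
    last_access: float = field(default_factory=time.monotonic)  # factor 3: recency


def victim_score(block: CacheBlock, now: float) -> float:
    """Illustrative score: higher means a better eviction candidate.
    Blocks that were accessed long ago, occupy more space, and have
    moved between levels rarely score higher. Weights are placeholders."""
    age = now - block.last_access
    return 1.0 * age + 0.5 * block.size - 2.0 * block.moves


def select_victim(blocks: list[CacheBlock]) -> CacheBlock:
    """Return the block with the highest eviction score."""
    now = time.monotonic()
    return max(blocks, key=lambda b: victim_score(b, now))


# Example: pick a victim among three blocks with different histories.
cache = [
    CacheBlock("a", size=4, moves=5),
    CacheBlock("b", size=16, moves=1),
    CacheBlock("c", size=8, moves=0),
]
print(select_victim(cache).key)
```

In this sketch the weights simply encode the stated preferences (favoring old, large, rarely moved blocks); the actual policy would need to calibrate how the three factors trade off against each other.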