Power consumption is an increasingly pressing concern for data servers, as it directly affects running costs and system reliability. Prior studies have shown that most memory space on data servers is used for buffer caching, and thus cache replacement becomes critical. Two conflicting factors of buffer caching impact memory energy efficiency: (1) a higher hit rate reduces memory traffic and thus saves energy; (2) temporally concentrating memory accesses on a smaller set of memory chips increases the chance of "free riding" through DMA overlapping and also gives more memory chips opportunities to power down. This paper investigates the tradeoff between these two interacting, sometimes conflicting factors and proposes three energy-aware buffer cache replacement algorithms: on a cache miss for a new block b in a file f, evict a victim block from (1) the most recently accessed memory chip; (2) the memory chip most recently accessed by file f; or (3) the memory chip most recently accessed by file f whose last accessed block belongs to the same hot or cold category as block b.
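To make the three victim-selection rules concrete, the following is a minimal Python sketch under simplifying assumptions; it illustrates the rules as stated above and is not the implementation evaluated in this paper. The block-to-chip mapping, the hot/cold threshold, the fallback used in rule (3), and the within-chip LRU ordering are all assumptions made for exposition.

from collections import OrderedDict

class EnergyAwareCache:
    """Illustrative sketch of the three energy-aware victim-selection rules."""

    def __init__(self, capacity, num_chips, policy=1, hot_threshold=2):
        self.capacity = capacity
        self.num_chips = num_chips
        self.policy = policy                  # 1, 2, or 3, as numbered above
        self.hot_threshold = hot_threshold    # accesses before a block counts as hot (assumed)
        self.blocks = OrderedDict()           # (file, block) -> chip, kept in LRU order
        self.access_count = {}                # (file, block) -> number of accesses
        self.last_chip = 0                    # most recently accessed chip (rule 1)
        self.file_last_chip = {}              # file -> chip it last touched (rules 2, 3)
        self.chip_last_block = {}             # chip -> block last accessed on it (rule 3)

    def _is_hot(self, key):
        return self.access_count.get(key, 0) >= self.hot_threshold

    def _victim_chip(self, f, key):
        if self.policy == 1:
            return self.last_chip                          # rule (1)
        chip = self.file_last_chip.get(f, self.last_chip)  # rules (2) and (3)
        if self.policy == 3:
            last = self.chip_last_block.get(chip)
            # Rule (3): the chip's last accessed block must share b's hot/cold
            # category; falling back to rule (1) otherwise is an assumption.
            if last is None or self._is_hot(last) != self._is_hot(key):
                chip = self.last_chip
        return chip

    def _evict_from(self, chip):
        # Evict the least recently used block resident on the chosen chip;
        # if that chip holds no cached block, fall back to global LRU (assumed).
        victim = next((k for k, c in self.blocks.items() if c == chip), None)
        if victim is not None:
            del self.blocks[victim]
        else:
            self.blocks.popitem(last=False)

    def access(self, f, block):
        """Access block `block` of file `f`; on a miss, evict if the cache is full."""
        key = (f, block)
        self.access_count[key] = self.access_count.get(key, 0) + 1
        if key in self.blocks:                    # hit: refresh LRU position
            chip = self.blocks.pop(key)
        else:                                     # miss: pick a victim chip first
            if len(self.blocks) >= self.capacity:
                self._evict_from(self._victim_chip(f, key))
            chip = hash(key) % self.num_chips     # placeholder block-to-chip mapping
        self.blocks[key] = chip
        self.last_chip = chip
        self.file_last_chip[f] = chip
        self.chip_last_block[chip] = key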
Simulation results based on three real-world I/O traces, including TPC-R, MSN-BEFS and Exchange, show that our algorithms can save up to 24.9% energy with marginal degradation in hit rates. In some experiments, our algorithms show degradation in response time. We also propose an off-line, energy-suboptimal replacement algorithm that serves as a theoretical reference.