Proceedings. 31st Annual International Symposium on Computer Architecture, 2004.
DOI: 10.1109/isca.2004.1310776
Adaptive cache compression for high-performance processors

Cited by 135 publications (120 citation statements)
References 32 publications
“…As the benefit of transparent compression depends on the specific data, it is not a universal approach. Additional studies have been done to reduce the performance gap between processor and memory calls by introducing cache compression [9,33]. A wide range of scientific applications and tools already involves compression mechanisms for computation or out-processing, like LAMMPS, HDF5 and MapReduce [40,120].…”
Section: Lossless Compression (mentioning; confidence: 99%)
“…At low resistance levels, the deviation of their corresponding M distribution is smaller. In order to calculate SER, we used the analytical model presented in [51]. Our model applies to Tables 3a and 3b.…”
Section: Reliability Model (mentioning; confidence: 99%)
“…We also evaluated our scheme and WT method with FPC compression. FPC uses frequent pattern compression [2] to capture the most frequent patterns and store them in fewer bits. A study in [26] showed that the FPC hardware overhead is negligible (0.7ns for compression, 1.2ns for decompression).…”
Section: Experimental Settings (mentioning; confidence: 99%)
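
For context, the sketch below illustrates the pattern-matching idea behind frequent pattern compression as described in [2]: every 32-bit word of a cache line is tested against a few static patterns (all zeros, sign-extended byte, sign-extended halfword) and stored as a short prefix plus only the bits the pattern requires. The reduced pattern set, the 2-bit prefix, and the 16-word line are simplifying assumptions for illustration; the actual scheme uses a larger pattern table.

```c
#include <stdint.h>
#include <stdio.h>

/* Reduced FPC-style pattern set: each 32-bit word is tagged with a short
 * prefix saying which static pattern it matched, and only the bits that
 * pattern needs are stored.  Sketch of the idea, not the full table of [2]. */
typedef enum {
    PAT_ZERO,         /* word is all zeros: prefix only           */
    PAT_SIGNED_8,     /* sign-extended 8-bit value: 8 data bits   */
    PAT_SIGNED_16,    /* sign-extended 16-bit value: 16 data bits */
    PAT_UNCOMPRESSED  /* no pattern matched: full 32 data bits    */
} fpc_pattern;

static fpc_pattern classify_word(uint32_t w) {
    int32_t s = (int32_t)w;                    /* two's-complement view */
    if (w == 0)                           return PAT_ZERO;
    if (s >= INT8_MIN  && s <= INT8_MAX)  return PAT_SIGNED_8;
    if (s >= INT16_MIN && s <= INT16_MAX) return PAT_SIGNED_16;
    return PAT_UNCOMPRESSED;
}

/* Compressed size of one word in bits: 2-bit prefix + pattern data bits. */
static unsigned compressed_bits(uint32_t w) {
    static const unsigned data_bits[] = { 0u, 8u, 16u, 32u };
    return 2u + data_bits[classify_word(w)];
}

int main(void) {
    /* A 64-byte cache line (16 words) dominated by zeros and narrow values. */
    uint32_t line[16] = { 0, 1, (uint32_t)-2, 300, 0, 0, 7, 65000,
                          0, 12, 0, 0, (uint32_t)-1, 5, 0, 1u << 20 };
    unsigned total = 0;
    for (int i = 0; i < 16; i++)
        total += compressed_bits(line[i]);
    printf("compressed size: %u bits (vs. %u bits uncompressed)\n",
           total, 16u * 32u);
    return 0;
}
```

Running the example shows how a line full of zeros and narrow values shrinks well below its 512-bit uncompressed size.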
“…Several cache compression techniques have been proposed that exploit the inter-block data localities to compress a cache block. ZCA [17], and a technique proposed by Ekman and Stenstrom [18] compress the zero blocks, whereas Alameldeen and Wood compress cache blocks that have narrow values (a small value stored in large size data type, for example a value of 1 that needs only one bit is stored with a long int data type) [19]. Arelakis and Stenstrom propose a statistical compression technique called SC² [20] that uses Huffman encoding.…”
Section: Related Work (mentioning; confidence: 99%)
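
To make the distinctions among these techniques concrete, the sketch below shows the two simplest block-level tests in C: a ZCA-style all-zero check [17,18] and a narrow-value check in the spirit of [19]. The 16-word block size and the byte-wide narrowness threshold are illustrative assumptions; SC² additionally builds frequency statistics and Huffman code tables, which is not shown here.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define LINE_WORDS 16   /* 64-byte block viewed as 16 32-bit words (assumed) */

/* Zero-block test in the spirit of [17,18]: an all-zero block needs no data
 * storage at all, only a tag-side "zero" indicator. */
static bool is_zero_block(const uint32_t *block) {
    for (int i = 0; i < LINE_WORDS; i++)
        if (block[i] != 0)
            return false;
    return true;
}

/* Narrow-value test in the spirit of [19]: if every word fits in a
 * sign-extended byte, the block can be stored at a quarter of its width. */
static bool is_narrow_block(const uint32_t *block) {
    for (int i = 0; i < LINE_WORDS; i++) {
        int32_t s = (int32_t)block[i];
        if (s < INT8_MIN || s > INT8_MAX)
            return false;
    }
    return true;
}

int main(void) {
    uint32_t zeros[LINE_WORDS]  = { 0 };
    uint32_t narrow[LINE_WORDS] = { 1, (uint32_t)-3, 7, 0, 5, 100,
                                    (uint32_t)-128, 127 };  /* rest zero */
    printf("zero block: %d, narrow block: %d\n",
           is_zero_block(zeros), is_narrow_block(narrow));
    return 0;
}
```

In hardware, both tests would amount to a word-parallel reduction over the line; the functions above only capture the functional behavior.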