32nd International Symposium on Computer Architecture (ISCA'05) 2005
DOI: 10.1109/isca.2005.6
A robust main-memory compression scheme

Cited by 147 publications (132 citation statements)
References 19 publications
“…A node can also prefetch the input for its next Map or Reduce task while processing the current one, which is similar to the double-buffering schemes used in streaming models [23]. Bandwidth and cache space can be preserved using hardware compression of intermediate pairs which tend to have high redundancy [10].…”
Section: Runtime System
confidence: 99%
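The double-buffering idea the excerpt describes can be sketched as follows: a background thread prefetches the input for the next task while the worker processes the current one, so fetch latency overlaps with computation. All names here (`load_input`, `process`, `run_tasks`) are illustrative stand-ins, not part of any cited runtime.

```python
import threading
from queue import Queue

def load_input(task_id):
    # Stand-in for fetching a Map/Reduce task's input from storage.
    return [task_id] * 4

def process(data):
    # Stand-in for the actual Map or Reduce work.
    return sum(data)

def run_tasks(task_ids):
    results = []
    prefetched = Queue(maxsize=1)  # at most one buffer "in flight"

    def prefetcher():
        for t in task_ids:
            prefetched.put(load_input(t))  # fill the next buffer ahead of time
        prefetched.put(None)               # sentinel: no more tasks

    threading.Thread(target=prefetcher, daemon=True).start()
    while (data := prefetched.get()) is not None:
        results.append(process(data))      # overlaps with the next prefetch
    return results
```

With `maxsize=1`, the prefetcher always stays exactly one task ahead of the consumer, which is the essence of double buffering.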
“…The runtime can also provide cache replacement hints for input and output pairs accessed in Map and Reduce tasks [25]. Finally, hardware compression/decompression of intermediate outputs as they are emitted in the Map stage or consumed in the Reduce stage can reduce bandwidth and storage requirements [10]. This section describes the experimental methodology we used to evaluate Phoenix.…”
Section: Concurrency and Locality Management
confidence: 99%
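The compression the excerpt refers to is performed in hardware on cache-block-sized data; purely as a software illustration of why highly redundant intermediate pairs shrink well, a toy run-length encoder is sketched below. This is a hypothetical example, not the scheme from the cited paper.

```python
def rle_compress(values):
    # Toy run-length encoding: a list of [value, run_length] pairs.
    # Real hardware schemes (e.g., zero-aware or frequent-pattern
    # compression) work on fixed-size blocks; this only shows the
    # payoff of redundancy.
    out = []
    for v in values:
        if out and out[-1][0] == v:
            out[-1][1] += 1
        else:
            out.append([v, 1])
    return out

def rle_decompress(pairs):
    # Expand each [value, run_length] pair back into the original stream.
    return [v for v, n in pairs for _ in range(n)]
```

A run of identical values collapses to a single pair, so data dominated by repeats (such as zero-filled buffers) compresses by a large factor, while already-random data does not.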
“…The operating system running on the core uses part of this memory, while the user can use the rest. Intel provides a custom Linux kernel that, during the boot process, allocates 5 (34)(35)(36)(37)(38) map configuration registers of cores. Entry 250 addresses the system interface; access to this memory is confined to the PCIe driver.…”
Section: SCC Address Spaces
confidence: 99%
“…Several proposals have been made suggesting memory compression to save bandwidth [2][15] [13]. We deem these approaches orthogonal to fine-grained fetch since compression of sub-blocks could further alleviate the bandwidth problem.…”
Section: Related Work
confidence: 99%