Proceedings of the 20th Annual International Conference on Supercomputing 2006
DOI: 10.1145/1183401.1183444
Accelerating sparse matrix computations via data compression

Cited by 115 publications (74 citation statements)
References 19 publications
“…This is the first advantage our implementation conferred on the Cell. Note that our technique is less general but simpler than a recent index compression approach [23].…”
Section: Index Size Selection (mentioning, confidence: 99%)
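The "index size selection" referenced in this excerpt amounts to storing column indices in the narrowest integer type that can still address every column of the matrix. The C sketch below is illustrative only, assuming a plain CSR layout; it is neither the citing paper's Cell implementation nor the compression scheme of [23], and the struct and function names are assumptions.

#include <stdint.h>

typedef struct {
    int nrows, ncols, nnz;
    int     *row_ptr;    /* nrows + 1 entries                      */
    void    *col_idx;    /* points to uint16_t[] or uint32_t[]     */
    int      idx_bytes;  /* 2 or 4, chosen by choose_index_bytes() */
    double  *val;        /* nnz values                             */
} csr_t;

/* Pick the narrowest index width that can address every column. */
static int choose_index_bytes(int ncols)
{
    return (ncols <= UINT16_MAX + 1) ? 2 : 4;
}

/* y = A * x, dispatching on the stored index width. */
void spmv(const csr_t *A, const double *x, double *y)
{
    for (int i = 0; i < A->nrows; i++) {
        double sum = 0.0;
        if (A->idx_bytes == 2) {
            const uint16_t *col = (const uint16_t *)A->col_idx;
            for (int k = A->row_ptr[i]; k < A->row_ptr[i + 1]; k++)
                sum += A->val[k] * x[col[k]];
        } else {
            const uint32_t *col = (const uint32_t *)A->col_idx;
            for (int k = A->row_ptr[i]; k < A->row_ptr[i + 1]; k++)
                sum += A->val[k] * x[col[k]];
        }
        y[i] = sum;
    }
}

Halving the index width halves the index traffic per nonzero, which is the bandwidth saving both the citing paper and [23] are after; the narrow-index path only applies when the matrix has at most 65,536 columns.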
“…These patterns include blocks [13], variable or mixtures of differently-sized blocks [12], diagonals, which may be especially well-suited to machines with SIMD and vector units [32,28], general pattern compression [33], value compression [15], and combinations.…”
Section: Related Work (mentioning, confidence: 99%)
“…Many approaches have been suggested to reduce the memory bandwidth requirements in SpMV: row/column reordering [38,37], register blocking [41], compressing row or column indices [45], cache blocking [25,46], symmetry [39], using single or mixed precision [16], and reorganizing the SpMV ordering across multiple iterations in a solver [35], among others. Some of these approaches are hard to parallelize.…”
Section: Introduction (mentioning, confidence: 99%)
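Of the approaches listed in this excerpt, register blocking [41] is perhaps the easiest to show concretely: storing one column index per small dense block instead of per nonzero shrinks the index stream and exposes register reuse. The sketch below uses fixed 2x2 blocks; the BCSR-style layout and names are assumptions for illustration, not the cited implementations.

#include <stdint.h>

typedef struct {
    int     brows;     /* number of block rows (matrix rows / 2)     */
    int    *brow_ptr;  /* brows + 1 entries                          */
    int    *bcol_idx;  /* one block-column index per 2x2 block       */
    double *bval;      /* 4 values per block, row-major within block */
} bcsr22_t;

/* y = A * x for a matrix stored as dense 2x2 blocks. */
void spmv_bcsr22(const bcsr22_t *A, const double *x, double *y)
{
    for (int bi = 0; bi < A->brows; bi++) {
        double y0 = 0.0, y1 = 0.0;
        for (int k = A->brow_ptr[bi]; k < A->brow_ptr[bi + 1]; k++) {
            const double *b  = &A->bval[4 * k];
            const double  x0 = x[2 * A->bcol_idx[k]];
            const double  x1 = x[2 * A->bcol_idx[k] + 1];
            y0 += b[0] * x0 + b[1] * x1;
            y1 += b[2] * x0 + b[3] * x1;
        }
        y[2 * bi]     = y0;
        y[2 * bi + 1] = y1;
    }
}

For matrices with genuine 2x2 dense substructure this stores one index per four nonzeros; for matrices without it, explicit zeros are filled in and the trade-off can go the other way, which is why block-size selection is itself a tuning problem in the cited work.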
“…The indices are normally represented as integers, but there are various ways to reduce their size. Willcock and Lumsdaine [45] apply graph compression techniques to reduce the size, showing speedups of up to 33% (although much more modest numbers on average). Williams et al. point out that by using cache blocking, it is possible to reduce the number of bits for the column indices since the number of columns in the block is typically small [46].…”
Section: Introduction (mentioning, confidence: 99%)
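The observation attributed to Williams et al. [46] can be made concrete with a small sketch: if the matrix is split into column panels of at most 65,536 columns, each nonzero's column index can be stored in 16 bits relative to the panel's first column. The panel structure and names below are illustrative assumptions, not the authors' code.

#include <stdint.h>

typedef struct {
    int       col_start;  /* first matrix column covered by this panel */
    int       nrows;
    int      *row_ptr;    /* nrows + 1 entries                         */
    uint16_t *local_col;  /* column index relative to col_start        */
    double   *val;
} panel_t;

typedef struct {
    int      npanels;
    panel_t *panel;
} panel_matrix_t;

/* y += A * x, one column panel at a time.
 * Caller must zero y first, since every panel accumulates into it. */
void spmv_panels(const panel_matrix_t *A, const double *x, double *y)
{
    for (int p = 0; p < A->npanels; p++) {
        const panel_t *P  = &A->panel[p];
        const double  *xp = x + P->col_start;  /* panel's slice of x */
        for (int i = 0; i < P->nrows; i++) {
            double sum = 0.0;
            for (int k = P->row_ptr[i]; k < P->row_ptr[i + 1]; k++)
                sum += P->val[k] * xp[P->local_col[k]];
            y[i] += sum;
        }
    }
}

Beyond halving index storage, the panel decomposition also keeps the touched slice of x small enough to stay in cache, which is the original motivation for cache blocking in [25,46].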