2022
DOI: 10.48550/arxiv.2201.05072
Preprint
SparseP: Towards Efficient Sparse Matrix Vector Multiplication on Real Processing-In-Memory Systems

Abstract: Several manufacturers have already started to commercialize near-bank Processing-In-Memory (PIM) architectures, after decades of research efforts. Near-bank PIM architectures place simple cores close to DRAM banks. Recent research demonstrates that they can yield significant performance and energy improvements in parallel applications by alleviating data access costs. Real PIM systems can provide high levels of parallelism, large aggregate memory bandwidth and low memory access latency, thereby being a good fit…

Cited by 3 publications (6 citation statements)
References 123 publications (176 reference statements)
“…CSR instead lists the number of non-zero elements in each row and their column position. We adopt a COO scheme in our work, as has been shown in [9] to lead to greater efficiency in distributed computing. Moreover, it results in a simpler hardware implementation for controlling the execution of SpMV multiplication.…”
Section: Add Mult
confidence: 99%
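For readers unfamiliar with the COO layout this statement refers to, the sketch below shows a minimal COO-based SpMV kernel in C. The struct, function names, and the toy matrix are illustrative assumptions, not taken from SparseP or the citing paper; they only demonstrate why COO's flat non-zero list is easy to partition across workers, whereas CSR would replace the per-element row array with cumulative row pointers.

#include <stdio.h>

/* Illustrative COO sparse-matrix layout: the nnz non-zeros are stored
 * as parallel (row, col, val) arrays. In CSR, the row array would
 * instead be a cumulative row-pointer array of length nrows + 1. */
typedef struct {
    int    nnz;   /* number of non-zero elements   */
    int   *row;   /* row index of each non-zero    */
    int   *col;   /* column index of each non-zero */
    float *val;   /* value of each non-zero        */
} coo_matrix;

/* y = A * x for a COO matrix A with nrows rows. */
void spmv_coo(const coo_matrix *A, const float *x, float *y, int nrows) {
    for (int i = 0; i < nrows; i++)
        y[i] = 0.0f;
    /* One flat loop over non-zeros: iterations carry no per-row state,
     * which is what makes COO easy to split among distributed workers. */
    for (int k = 0; k < A->nnz; k++)
        y[A->row[k]] += A->val[k] * x[A->col[k]];
}

int main(void) {
    /* Toy 3x3 matrix: [[4 0 0], [0 0 5], [0 6 0]] */
    int   row[] = {0, 1, 2};
    int   col[] = {0, 2, 1};
    float val[] = {4.0f, 5.0f, 6.0f};
    coo_matrix A = {3, row, col, val};

    float x[] = {1.0f, 2.0f, 3.0f};
    float y[3];
    spmv_coo(&A, x, y, 3);
    printf("y = [%g %g %g]\n", y[0], y[1], y[2]); /* prints y = [4 15 12] */
    return 0;
}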
“…Most works in SpMV multiplication in NMC consider highperformance computing solutions. The authors of [9] and [6] propose to integrate SpMV computing units into DRAM banks on a 3D integration using Through Silicon Vias (TSV).…”
Section: B. Near-Memory Computing
confidence: 99%
“…Sparse linear algebra: A growing number of hardware solutions are being designed for sparse linear algebra, like Sparse-TPU [25], SpArch [58], SparseP [19], etc. Some specifically target sparsity in deep learning algebra, e.g., SNAP [57], Sticker [56], [12].…”
Section: FPGA
confidence: 99%