Proceedings of the 49th Annual International Symposium on Computer Architecture 2022
DOI: 10.1145/3470496.3527432
MeNDA

Abstract: Near-memory processing has been extensively studied to optimize memory intensive workloads. However, none of the proposed designs address sparse matrix transposition, an important building block in sparse linear algebra applications. Prior work shows that sparse matrix transposition does not scale as well as other sparse primitives such as sparse matrix vector multiplication (SpMV) and hence has become a growing bottleneck in common applications. Sparse matrix transposition is highly memory intensive but low i…
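To make the primitive concrete, the following is a minimal software sketch of sparse matrix transposition on the common CSR (compressed sparse row) format, producing the CSC view of the matrix via a count/prefix-sum/scatter pass. This is a hypothetical illustration of the primitive the abstract names, not a description of MeNDA's near-memory hardware mechanism; all names here are illustrative.

```python
# Sketch: transpose a CSR matrix by building its CSC representation.
# (CSC of A is exactly CSR of A^T.)

def csr_transpose(n_rows, n_cols, row_ptr, col_idx, vals):
    """Return (col_ptr, row_idx, tvals): the CSR form of the transpose."""
    # 1) Count nonzeros per column.
    counts = [0] * n_cols
    for c in col_idx:
        counts[c] += 1
    # 2) Prefix-sum the counts into column pointers.
    col_ptr = [0] * (n_cols + 1)
    for c in range(n_cols):
        col_ptr[c + 1] = col_ptr[c] + counts[c]
    # 3) Scatter each nonzero into its column bucket.
    #    This pass has poor spatial locality: writes land at
    #    data-dependent offsets scattered across the output arrays.
    next_slot = col_ptr[:-1].copy()
    row_idx = [0] * len(col_idx)
    tvals = [0] * len(vals)
    for r in range(n_rows):
        for k in range(row_ptr[r], row_ptr[r + 1]):
            c = col_idx[k]
            s = next_slot[c]
            row_idx[s] = r
            tvals[s] = vals[k]
            next_slot[c] += 1
    return col_ptr, row_idx, tvals


# Example: the 2x3 matrix [[1, 0, 2], [0, 3, 0]] in CSR form.
col_ptr, row_idx, tvals = csr_transpose(
    2, 3, row_ptr=[0, 2, 3], col_idx=[0, 2, 1], vals=[1, 2, 3]
)
# col_ptr == [0, 1, 2, 3], row_idx == [0, 1, 0], tvals == [1, 3, 2]
```

The scatter step (3) is why the primitive is memory-intensive with little arithmetic: nearly all the work is irregular pointer chasing and data-dependent writes, which is the behavior the abstract says scales poorly compared to SpMV.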

Cited by 11 publications (2 citation statements) | References 59 publications
“…Reducing the distance between memory and processing components [40,50]. Compute-in-Memory: MAC computation in the analog domain, performed in the memory array, thereby removing the need to move weights.…”
Section: Near-memory Processing
confidence: 99%
“…The progress of parallel and distributed file systems has provided larger bandwidth, which calls for new processing models. As explained in Section II-C, creating and analyzing graphs involves graph algorithms such as graph transposition [54], [55], symmetrization, and sorting [56], [26], which require further investigation.…”
Section: Impacts Of Creating Datasets On Progressing Research
confidence: 99%