2020
DOI: 10.1109/access.2020.2983432
Making Better Use of Processing-in-Memory Through Potential-Based Task Offloading

Abstract: There is an increasing demand for novel computing structures for data-intensive applications such as artificial intelligence and virtual reality. Processing-in-memory (PIM) is a promising alternative that reduces the overhead caused by data movement. Many studies have explored the use of PIM by taking advantage of the bandwidth increase provided by through-silicon vias (TSVs). One approach is to design an optimized PIM architecture for a specific application; the other is to find the tasks that w…
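The abstract is cut off above, so the paper's actual potential metric and offloading policy are not visible in this excerpt. Purely as an illustration of what a potential-based offloading decision can look like, the hypothetical Python sketch below scores each task by bytes moved per arithmetic operation and marks memory-bound tasks for execution on the PIM side; every name, field, and the threshold value are assumptions made for illustration, not the paper's method.

# Hedged sketch (not the paper's algorithm): a toy "potential" score for
# deciding which tasks to offload to PIM. Assumes each task has been profiled
# for the bytes it moves between memory and the host and for the arithmetic
# operations it performs. Names and the threshold are illustrative only.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    bytes_moved: float   # data traffic the task would cause on the host (bytes)
    ops: float           # arithmetic operations the task performs

def offload_potential(task: Task) -> float:
    """Higher when a task moves many bytes per operation, i.e. it is
    bandwidth-bound and likely to benefit from executing near memory."""
    return task.bytes_moved / max(task.ops, 1.0)

def select_for_pim(tasks, threshold=4.0):
    """Offload tasks whose bytes-per-op ratio exceeds an (assumed) threshold."""
    return [t for t in tasks if offload_potential(t) > threshold]

if __name__ == "__main__":
    tasks = [
        Task("dense_gemm", bytes_moved=1e6, ops=1e9),    # compute-bound: keep on host
        Task("stream_copy", bytes_moved=8e8, ops=1e8),   # memory-bound: offload
    ]
    for t in select_for_pim(tasks):
        print("offload to PIM:", t.name)

The intuition behind such a score is that tasks which move many bytes per operation gain the most from the internal bandwidth of 3D-stacked memory, while compute-bound tasks are better kept on the host processor.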

Cited by 2 publications (23 citation statements)
References 30 publications
“…For DCC architectures, solutions can be divided into two main categories: (1) PIM systems, which perform computations using special circuitry inside the memory module or by taking advantage of particular aspects of the memory itself, e.g., simultaneous activation of multiple DRAM rows for logical operations [1, 11-25]; (2) NMP systems, which perform computations on a PE placed close to the memory module, e.g., CPU or GPU cores placed on the logic layer of 3D-stacked memory [26-42]. For the purposes of this survey, we classify systems that use logic layers in 3D-stacked memories as NMP systems, as these logic layers are essentially computational cores that are near the memory stack (directly underneath it).…”
Section: Data-centric Computing Architectures (mentioning, confidence: 99%)
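The quoted passage mentions performing logical operations by activating multiple DRAM rows simultaneously. As a rough, hedged illustration of that idea, in the spirit of Ambit-style triple-row activation rather than a description of any specific cited design, the Python model below treats each row as a bit vector: activating three rows at once resolves each bitline to the majority of the three cells, so a control row of all zeros yields AND and a control row of all ones yields OR.

# Hedged sketch: software model of in-DRAM bitwise operations performed by
# simultaneous row activation. This only mimics the logical semantics; it is
# not a hardware or timing simulation of any particular PIM design.
def triple_row_activate(row_a: int, row_b: int, row_c: int, width: int = 64) -> int:
    """Bitwise majority of three DRAM rows, each modeled as a width-bit integer."""
    mask = (1 << width) - 1
    return ((row_a & row_b) | (row_b & row_c) | (row_a & row_c)) & mask

def in_dram_and(a: int, b: int, width: int = 64) -> int:
    return triple_row_activate(a, b, 0, width)                  # control row = all zeros

def in_dram_or(a: int, b: int, width: int = 64) -> int:
    return triple_row_activate(a, b, (1 << width) - 1, width)   # control row = all ones

assert in_dram_and(0b1100, 0b1010, 4) == 0b1000
assert in_dram_or(0b1100, 0b1010, 4) == 0b1110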
“…Programmable logic: Programmable logic PEs can include general purpose processor cores such as CPUs [31, 40, 70-72], GPUs [26, 27, 38, 73, 74], and accelerated processing units (APU) [75] that can execute complex workloads. These cores are usually trimmed down (fewer computation units, less complex cache hierarchies without L2/L3 caches, or lower operating frequencies) from their conventional counterparts due to power, area, and thermal constraints.…”
Section: J Low Power Electron Appl 2020 (mentioning, confidence: 99%)