Proceedings of the 16th International Conference on Supercomputing - ICS '02 2002
DOI: 10.1145/514195.514197
The architecture of the DIVA processing-in-memory chip

Year Published: 2005–2023

Cited by 24 publications (27 citation statements); References: 0 publications
“…The DIVA architecture [9] was developed by Draper, Hall, and others at USC ISI to provide a multicore scalable PIM architecture for a wide array of general applications including scalable embedded applications. This PIM architecture incorporated a simple mechanism for message (parcel) driven computation and supported a network that permitted the interconnection of a number of such components to work together in parallel on the same application.…”
Section: Related Research in the Field
Citation type: mentioning (confidence: 99%)
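The parcel mechanism described above can be illustrated with a minimal sketch: a parcel names the node that owns the data, the operation to perform, and its operands, and the network routes it to that node, which executes the operation against its local memory bank. The `Parcel`, `PIMNode`, and `ParcelNetwork` names below are illustrative assumptions, not DIVA's actual interfaces.

```python
from dataclasses import dataclass

# Hypothetical sketch of parcel-driven computation (names are illustrative,
# not taken from the DIVA implementation).

@dataclass
class Parcel:
    target_node: int   # which PIM node owns the data
    operation: str     # e.g. "store" or "add"
    address: int       # offset into that node's local memory bank
    value: int         # operand

class PIMNode:
    def __init__(self, node_id: int, mem_words: int = 16):
        self.node_id = node_id
        self.memory = [0] * mem_words  # node-local memory bank

    def execute(self, parcel: Parcel) -> None:
        # Computation happens next to the memory that holds the data,
        # instead of shipping the data to a central processor.
        if parcel.operation == "store":
            self.memory[parcel.address] = parcel.value
        elif parcel.operation == "add":
            self.memory[parcel.address] += parcel.value

class ParcelNetwork:
    def __init__(self, num_nodes: int):
        self.nodes = [PIMNode(i) for i in range(num_nodes)]

    def send(self, parcel: Parcel) -> None:
        # Route the parcel to the node owning the target address;
        # multiple nodes can work in parallel on the same application.
        self.nodes[parcel.target_node].execute(parcel)

net = ParcelNetwork(num_nodes=2)
net.send(Parcel(target_node=0, operation="store", address=3, value=40))
net.send(Parcel(target_node=0, operation="add", address=3, value=2))
net.send(Parcel(target_node=1, operation="store", address=0, value=7))
print(net.nodes[0].memory[3])  # 42
print(net.nodes[1].memory[0])  # 7
```

The design point is that a parcel carries the work to the memory rather than the reverse, so each node only ever touches its own bank.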
“…DIVA targets applications that are not aided by caches in conventional systems due to little spatial or temporal data locality and are thus severely impacted by the processor-memory bottleneck. Based on our first PIM implementation, a PIM system incorporating these devices is projected to achieve speedups ranging from 8.8 to 38.3 over conventional workstations for a number of applications [2]. Since DIVA PIM chips serve primarily as memory components, it is important to preserve a large majority of the die area for memory, so the processing logic for such PIM chips should be compacted as much as possible.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
“…Processing-in-memory (PIM) [1] has been proposed as a solution to the memory wall problem. It yields dramatically increased memory bandwidth by the inherent nature of an embedded processor directly connected to a memory bank.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
“…It yields dramatically increased memory bandwidth by the inherent nature of an embedded processor directly connected to a memory bank. Although processing-in-memory architectures like the Data-Intensive Architecture (DIVA) [1] have significant memory latency advantages over conventional systems, as fabrication technologies advance, latency to on-chip embedded DRAM (eDRAM) is increasing. Conventional systems have employed data caches and load/store queues (LSQ) to combat increasing latency.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)