2017
DOI: 10.1145/3140659.3080233
The Mondrian Data Engine

Abstract: The increasing demand for extracting value out of ever-growing data poses an ongoing challenge to system designers, a task only made trickier by the end of Dennard scaling. As the performance density of traditional CPU-centric architectures stagnates, advancing compute capabilities necessitates novel architectural approaches. Near-memory processing (NMP) architectures are reemerging as promising candidates to improve computing efficiency through tight coupling of logic and memory. NMP architectures are especia…

Cited by 47 publications (69 citation statements)
References 45 publications
“…Near Memory Processing (NMP) is the practice of placing compute logic near memory, generally DRAM, in an effort to decrease access time [4]. Many architectures allocate compute blocks on the logic layer of HMC DRAM [12], [13], [14], [15]. Other works couple GPU architectures with 3D stacked memories [16], [17].…”
Section: Near Memory Processing (NMP) (mentioning)
Confidence: 99%
“…Second, from an application viewpoint, few works provide potential solutions for optimizing algorithms to utilize NMP units while accounting for data locality [17], [21]. Finally, the HMC logic layer area/power budget is very constrained, thus limiting NMP logic complexity [13].…”
Section: Near Memory Processing (NMP) (mentioning)
Confidence: 99%
“…Recent work optimized quick sort [11], hash joins [14], scientific workloads [40,50], and machine learning [70] for KNL's HBM, but not streaming analytics. Beyond KNL, Mondrian [18] uses hardware support for analytics on high memory bandwidth in near-memory processing. Together, these results highlight the significance of sequential access and vectorized algorithms, affirming StreamBox-HBM's design.…”
Section: Related Work (mentioning)
Confidence: 99%
“…NDP cores enjoy lower latency and energy to the memory stacked above them, but have limited area and power budgets [26,62]. These factors naturally bias NDP systems not only towards efficient cores [22,25], but also towards shallow hierarchies with few cache levels between cores and memories.…”
Section: Introduction (mentioning)
Confidence: 99%