2010 ACM/IEEE International Conference for High Performance Computing, Networking, Storage and Analysis
DOI: 10.1109/sc.2010.16
DASH: a Recipe for a Flash-based Data Intensive Supercomputer

Abstract: Data intensive computing can be defined as computation involving large datasets and complicated I/O patterns. Data intensive computing is challenging because there is a five-orders-of-magnitude latency gap between main-memory DRAM and spinning hard disks; the result is that an inordinate amount of time in data intensive computing is spent accessing data on disk. To address this problem we designed and built a prototype data intensive supercomputer named DASH that exploits flash-based Solid State Drive…

Cited by 37 publications (12 citation statements)
References 12 publications
“…Our measurements and other recent work [13] show that software RAID provides better performance for SSD-based arrays than hardware controllers, because the processors on hardware RAID controllers become a bottleneck. Therefore, we use software RAID for this array.…”
Section: Raid-ssdmentioning
confidence: 55%
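The software-RAID claim above concerns striping blocks across SSDs in the host CPU rather than on a controller. A minimal sketch of the RAID-0 address arithmetic such an array performs is below; the device count and chunk size are illustrative assumptions, not the cited system's actual configuration:

```python
# Sketch of RAID-0 striping: map a logical byte offset to
# (device index, offset within that device). Parameters are
# illustrative assumptions, not a real array's configuration.
def raid0_map(logical_offset: int, n_devices: int = 4,
              chunk_size: int = 64 * 1024) -> tuple[int, int]:
    chunk_index = logical_offset // chunk_size        # which stripe chunk
    within_chunk = logical_offset % chunk_size        # offset inside chunk
    device = chunk_index % n_devices                  # chunks rotate round-robin
    device_offset = (chunk_index // n_devices) * chunk_size + within_chunk
    return device, device_offset

# Consecutive chunks land on different devices, which is what lets
# an SSD array service a large sequential read in parallel.
print(raid0_map(0))            # -> (0, 0)
print(raid0_map(64 * 1024))    # -> (1, 0)
print(raid0_map(256 * 1024))   # -> (0, 65536)
```

Because this bookkeeping is simple integer arithmetic, a modern host CPU can do it at SSD speeds, which is consistent with the cited observation that a hardware RAID controller's processor becomes the bottleneck instead.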
“…• Dash [7], a cluster targeted for data-intensive computing at SDSC, consisting of 32 nodes each with dual quad-core 2.4 GHz Intel Nehalem processors and 48 GB of RAM per node. Half of Dash is configured as a traditional 16-node cluster but with a single 64 GB flash drive in each node.…”
Section: Resultsmentioning
confidence: 99%
“…Moving to the field, several research laboratories are making efforts to improve the performance of their supercomputers with the SSD-based storage. For instance, San Diego Supercomputing Center deployed SSDs in their supercomputer, Gordon [12,34,42], to reduce the latency gap between the memory and disks, as has the Tokyo Institute of Technology for their supercomputer, TSUBAME [27].…”
Section: Related Workmentioning
confidence: 99%