Proceedings of the 50th Annual IEEE/ACM International Symposium on Microarchitecture 2017
DOI: 10.1145/3123939.3124553
Summarizer

Abstract: Modern data center solid state drives (SSDs) integrate multiple general-purpose embedded cores to manage the flash translation layer, garbage collection, wear-leveling, etc., to improve the performance and reliability of SSDs. As the performance of these cores steadily improves, there are opportunities to repurpose them to perform application-driven computations on stored data, with the aim of reducing communication between the host processor and the SSD. Reducing host-SSD bandwidth demand cuts dow…

Cited by 82 publications (10 citation statements) | References 30 publications
“…Second, it reduces the overall performance and energy burden of the application from the rest of the system, freeing them up to do other useful work meanwhile. Third, as shown by many prior works (e.g., [88][89][90][91]), ISP can benefit from the SSD's larger internal bandwidth. For example, with 8 (16) channels for SSD-C (SSD-P) and the maximum per-channel bandwidth of 1.2 GB/s, the maximum internal bandwidth is calculated to be 9.6 GB/s (19.2 GB/s).…”
Section: Our Goal
confidence: 95%
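The internal-bandwidth figure quoted above is a simple product of channel count and per-channel bandwidth. A minimal sketch of that arithmetic (the 1.2 GB/s per-channel figure and the 8/16 channel counts are taken from the quoted statement; the function name is illustrative):

```python
# Per-channel bandwidth assumed by the citing paper, in GB/s.
PER_CHANNEL_GBPS = 1.2

def internal_bandwidth(channels: int, per_channel_gbps: float = PER_CHANNEL_GBPS) -> float:
    """Peak internal SSD bandwidth = number of channels x per-channel bandwidth."""
    return channels * per_channel_gbps

# SSD-C with 8 channels: 9.6 GB/s; SSD-P with 16 channels: 19.2 GB/s.
print(internal_bandwidth(8))
print(internal_bandwidth(16))
```

This peak figure assumes all channels transfer concurrently at their maximum rate; real sustained internal bandwidth is lower once controller and flash-array overheads are accounted for.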
“…In-Storage Processing. Several works propose ISP designs as accelerators for different applications (e.g., in machine learning [147,150,151], pattern processing and read mapping [91,149], and graph analytics [141]), general-purpose processing [145,146,152-163], bulk-bitwise operations using flash memory [164,165], in close integration with FPGAs [90,166-170], or GPUs [171]. None of these works perform metagenomic analysis nor address the challenges of ISP for metagenomics.…”
Section: Related Work
confidence: 99%
“…Works that show the general applicability of NDP exist; prominent examples include INSIDER [30], Biscuit [19], Summarizer [22], iSSD [13], and Willow [32]. These works demonstrated that it is possible to reap the benefits of NDP in SSDs by leveraging tailor-made, user-space programming frameworks.…”
Section: Background and Related Work
confidence: 99%
“…Although near data processing (NDP) in storage devices was proposed in the context of hard disk drives (HDDs) [7] and databases [20] a long time ago, it became technologically and economically feasible only recently with the introduction of solid state drives (SSDs) [11,16,17,19,22,28,30,32,37]. SSDs store data on NAND flash memory media, which have no mechanical parts compared to traditional rotational HDDs.…”
Section: Introduction
confidence: 99%
“…The advent of the information era has led to the explosive growth of data, and thus poses a tremendous challenge to the conventional computing paradigm based on the von Neumann architecture [1][2][3][4][5]. The continual data shuttling between the memory and the CPU dramatically hinders the improvement of speed and energy efficiency, which is referred to as the von Neumann bottleneck [6][7][8][9].…”
Section: Introduction
confidence: 99%