2020
DOI: 10.1002/cpe.6017
Transparent many‐core partitioning for high‐performance big data I/O

Abstract: As the number of cores equipped in a single computing node is rapidly increasing, utilizing many cores efficiently for contemporary applications is a challenging issue. We need to consider both parallelization and locality to fully exploit many cores for the multifarious operations of emerging applications. In particular, big data applications perform computation-intensive and I/O-intensive operations alternately. For instance, Apache Hadoop MapReduce assumes local persistent storage for each computing node…
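The abstract's central idea is to partition the cores of a many-core node so that I/O-bound and compute-bound work run on separate, locality-friendly core sets. As a rough illustration only, and not the mechanism proposed in the paper, the Linux-only Python sketch below statically pins an I/O worker process and a compute worker process to disjoint core sets via os.sched_setaffinity; the 4-core I/O partition, the worker functions, and the example file path are all assumptions made up for this sketch.

# Minimal sketch of static core partitioning for I/O vs. compute work.
# Illustrative only; the core split and workloads are assumed, not taken
# from the paper. Requires Linux (os.sched_setaffinity is Linux-specific).
import os
import multiprocessing as mp

IO_CORES = {0, 1, 2, 3}                               # assumed I/O partition
COMPUTE_CORES = set(range(os.cpu_count())) - IO_CORES # remaining cores

def io_worker(path: str) -> int:
    # Pin this worker to the I/O partition so storage traffic stays on a
    # fixed, small set of cores (device/NUMA placement is assumed).
    os.sched_setaffinity(0, IO_CORES)
    with open(path, "rb") as f:
        return len(f.read())

def compute_worker(n: int) -> int:
    # Pin compute-heavy work to the rest of the cores.
    os.sched_setaffinity(0, COMPUTE_CORES)
    return sum(i * i for i in range(n % 100_000))

if __name__ == "__main__":
    with mp.Pool(2) as pool:
        size = pool.apply(io_worker, ("/etc/hostname",))  # example file path
        print(pool.apply(compute_worker, (size,)))

In practice such a split would be driven by the runtime rather than hard-coded core IDs, but the sketch shows the basic locality idea: I/O and computation no longer compete for the same cores.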

Cited by 4 publications (1 citation statement) | References: 22 publications
“…In [66], Xu et al. defined a scheme for optimizing performance by placing data with high I/O cost in fast SSD storage. In [67], Lee et al. improved the locality of network and storage I/O operations on many-core systems running Big Data applications using Apache Hadoop MapReduce. In [68], Lu et al. discussed the importance of proper parameter settings in high-performance database systems for Big Data.…”
Section: Enabling Technologies for Big Data (citation type: mentioning)
Confidence: 99%