2018
DOI: 10.1007/978-3-030-00563-4_50
Hadoop Massive Small File Merging Technology Based on Visiting Hot-Spot and Associated File Optimization

Cited by 5 publications (1 citation statement)
References 7 publications
“…Some suggested solutions include altering HDFS by adding hardware to speed up small-file processing, or letting HDFS automatically combine small files before storage. Peng et al. proposed the Small Hadoop Distributed File System (SHDFS) [28], which is based on the original HDFS but includes a merging and a caching module. The merging module employs a correlated-files model to identify and merge correlated files using user-based collaborative filtering.…”

Section: Combining Small Files Into Large Files
Confidence: 99%
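The user-based collaborative-filtering step in the statement above could work roughly as sketched below: compute similarity between users' file-access histories, then treat files accessed by a user's most similar peers as correlated merge candidates. The access log, cosine measure, threshold, and function names are illustrative assumptions, not the SHDFS implementation:

```python
import math
from collections import defaultdict

# Hypothetical access log: user -> set of small files accessed (toy data).
access = {
    "u1": {"a.txt", "b.txt", "c.txt"},
    "u2": {"a.txt", "b.txt"},
    "u3": {"x.txt", "y.txt"},
}

def cosine(s1, s2):
    """Cosine similarity between two binary access vectors, given as sets."""
    inter = len(s1 & s2)
    if inter == 0:
        return 0.0
    return inter / math.sqrt(len(s1) * len(s2))

def correlated_files(target_user, access, k=2, threshold=0.5):
    """Rank files accessed by the k users most similar to target_user.

    Files co-accessed by sufficiently similar users score highest and
    become candidates for merging into one large HDFS file."""
    others = [(u, cosine(access[target_user], files))
              for u, files in access.items() if u != target_user]
    others.sort(key=lambda t: t[1], reverse=True)
    scores = defaultdict(float)
    for u, sim in others[:k]:
        if sim < threshold:
            continue  # ignore dissimilar users entirely
        for f in access[u]:
            scores[f] += sim  # weight each co-accessed file by similarity
    return sorted(scores, key=scores.get, reverse=True)
```

For example, `correlated_files("u2", access)` includes `c.txt`, because the most similar user `u1` accessed it alongside `u2`'s own files, while `u3`'s unrelated files are excluded.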