2016
DOI: 10.1109/tpds.2015.2496579

Fast Compression of Large Semantic Web Data Using X10

Abstract: The full-text may be used and/or reproduced, and given to third parties in any format or medium, without prior permission or charge, for personal research or study, educational, or not-for-profit purposes provided that:
• a full bibliographic reference is made to the original source
• a link is made to the metadata record in DRO
• the full-text is not changed in any way
The full-text must not be sold in any format or medium without the formal permission of the copyright holders.


Cited by 12 publications (5 citation statements)
References 37 publications
“…Theoretically, the first parallel approach will be more efficient than the second one, in terms of implementations using high performance computing (HPC) programming languages such as X10 [20], [21] and MPI [22]. This is because we can track the location of each subset of X and can manually assign them to a specified destination (i.e., sublog partition) according to our requirements.…”
Section: Candidate Pruning
confidence: 99%
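The quoted statement argues that a sender who can compute each subset's destination can route data to a specified partition directly. A minimal Python sketch of that idea, with hypothetical names (`assign_to_partitions` is illustrative, not from the paper, and a hash of the subset stands in for whatever deterministic placement rule an X10 or MPI implementation would use):

```python
def assign_to_partitions(subsets, num_partitions):
    """Route each subset to a partition chosen by a deterministic rule,
    so the destination of every subset is known to the sender in advance."""
    partitions = [[] for _ in range(num_partitions)]
    for subset in subsets:
        # Deterministic placement rule (here: hash of the subset's contents);
        # any rule works as long as the sender can compute the destination.
        dest = hash(frozenset(subset)) % num_partitions
        partitions[dest].append(subset)
    return partitions
```

Because placement is computed locally rather than discovered after an all-to-all exchange, each process can send a subset straight to its sublog partition.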
“…In recent years, a few scholars have begun to study parallel compression algorithms to speed up the operation [33][34][35][36]. Cheng [33] proposes an efficient algorithm for fast encoding of large Semantic Web data, and presents a detailed implementation of the approach based on the state-of-the-art asynchronous partitioned global address space (APGAS) parallel programming model. Urbani [34] proposes a MapReduce algorithm that efficiently compresses and decompresses large amounts of Semantic Web data.…”
Section: Related Work
confidence: 99%
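The encoding approaches cited above are built on dictionary compression: each distinct RDF term is replaced by a compact integer ID. A minimal single-threaded sketch of that core step (names are illustrative; the cited papers parallelize this over many places or MapReduce tasks):

```python
def encode_triples(triples):
    """Dictionary-encode (subject, predicate, object) triples:
    every distinct term maps to an integer ID assigned on first sight."""
    dictionary = {}  # term -> integer ID
    encoded = []
    for s, p, o in triples:
        ids = []
        for term in (s, p, o):
            if term not in dictionary:
                dictionary[term] = len(dictionary)
            ids.append(dictionary[term])
        encoded.append(tuple(ids))
    return dictionary, encoded
```

The compression win comes from long URI strings appearing many times but being stored once in the dictionary; the hard part, which the cited work addresses, is keeping the term-to-ID mapping consistent across parallel workers.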
“…Here, we compare POICAG and DOICAS with the other compression algorithms in terms of effectiveness. SQUISH [28], APGAS [33] and SWSM [34] were proposed in recent years and have been proven effective. From the graphs in Fig.…”
Section: Fig. 12 Error Threshold vs. Compression Ratio
confidence: 99%
“…Data skew occurs naturally in big data applications [5], [15], and transferring skewed data brings heavy network traffic and results in load imbalance. Therefore, it is very important for practical data systems to perform efficiently in such contexts [5].…”
Section: Skew Handling
confidence: 99%
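The load imbalance described in the quoted statement is easy to reproduce: naive hash partitioning of a skewed key distribution sends almost all records to one partition. An illustrative sketch (`partition_load` and the workload are hypothetical, not from the cited work):

```python
from collections import Counter

def partition_load(keys, num_partitions):
    """Return the number of records landing in each partition
    under plain hash partitioning."""
    counts = Counter(hash(k) % num_partitions for k in keys)
    return [counts.get(i, 0) for i in range(num_partitions)]

# A skewed workload: one "hot" key dominates, so whichever partition
# that key hashes to receives at least 90% of the records.
skewed_keys = ["hot"] * 90 + [f"k{i}" for i in range(10)]
```

Skew-aware systems detect such hot keys and split or replicate their records across partitions rather than trusting the hash alone.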