2009 Second International Conference on the Applications of Digital Information and Web Technologies (ICADIWT 2009)
DOI: 10.1109/icadiwt.2009.5273945

Performance of XSLT processors on large data sets

Cited by 5 publications (2 citation statements). References 10 publications.
“…To analyze the large amounts of XML data in science workflow, an approach for exploiting data parallelism in XML processing pipelines through novel compilation strategies within the MapReduce framework was presented in paper [11]. However, to the best of our knowledge, an efficient and scalable framework, which combines the MapReduce and XSLT technologies to implement large-scale XML data transformation, is still in blank [15].…”
Section: Introduction (mentioning)
confidence: 99%
“…First, in this approach an XSLT transformation processing is not load-balanced. Second, an XSLT transformation becomes inefficient if the size of the target XML document is large [20]. This implies that the centralized approach is inefficient even if the size of each XML fragment is small, whenever the merged document is large.…”
Section: Introduction (mentioning)
confidence: 99%
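The inefficiency noted in the citation above concerns the centralized pattern of merging XML fragments and then applying one XSLT transformation to the large merged document. As a minimal sketch of that pattern (assuming Python with lxml; the file names transform.xsl, large_merged.xml, and result.xml are hypothetical, not from the cited work), the code below shows why the whole-document step dominates cost as input size grows.

```python
# Minimal sketch of a centralized XSLT transformation over one large XML
# document, assuming Python with lxml; all file names are hypothetical.
from lxml import etree


def transform_document(xml_path: str, xslt_path: str, out_path: str) -> None:
    # Parse the stylesheet once and compile it into a reusable transformer.
    transform = etree.XSLT(etree.parse(xslt_path))

    # Parse the (potentially large) merged XML document fully into memory;
    # this whole-document step is what becomes costly as the input grows.
    source = etree.parse(xml_path)

    # Apply the transformation and serialize the result,
    # honoring the stylesheet's xsl:output settings.
    result = transform(source)
    result.write_output(out_path)


if __name__ == "__main__":
    transform_document("large_merged.xml", "transform.xsl", "result.xml")
```

Because both the source document and the result tree are held in memory by a single processor, neither the parsing nor the transformation work is load-balanced across machines, which is the limitation the citing authors point out for large merged documents.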