2013
DOI: 10.1016/j.jpdc.2013.07.022
Generating data transfers for distributed GPU parallel programs

Cited by 9 publications (7 citation statements)
References 21 publications
“…As can be seen from Figures 9 and 10, when the number of counters remains unchanged (2^23, 2^26 or 2^29) and g increases from 1024 to 4096, the number of hosts whose cardinality cannot be approximated by VATE also decreases. When g = 4096, for Caida 2015 02 19, the cardinality of all hosts in the time window W(600,300) can be approximately estimated.…”
Section: Methods
confidence: 94%
“…A GPU chip contains hundreds to thousands of processing units, far more than a CPU. For tasks without data-access conflicts that apply the same instructions to different data (single instruction, multiple data: SIMD), a GPU can achieve a high speedup [28] [29].…”
Section: Deploy VATE On GPU
confidence: 99%
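The SIMD condition quoted above (same instruction applied to different data, with no access conflicts) is what makes a task GPU-friendly. A minimal CPU-side sketch of that pattern, using NumPy vectorized operations as a stand-in for a GPU kernel (the counter array and the saturating-add update rule are illustrative assumptions, not taken from the cited papers):

```python
import numpy as np

def update_counters_simd(counters, increments):
    """Apply one instruction (a saturating add) to many independent
    counters at once. This is the SIMD pattern: no element's update
    depends on another, so the work parallelizes with no conflicts."""
    return np.minimum(counters + increments, 255)  # saturate at 8 bits

# One vectorized update over a million independent counters.
counters = np.zeros(1_000_000, dtype=np.int64)
increments = np.ones_like(counters)
counters = update_counters_simd(counters, increments)
```

On a GPU, the same update would be written once as a kernel and executed by thousands of threads, one element per thread; the absence of inter-element dependencies is exactly the condition under which the citing papers report high speedups.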
“…The numerical optimisation approach is universal, and it admits any distortion types or object types. It is promising that both the training for accumulating CEP statistics and the testing can be accelerated by GPU computation and parallelization [33], [34], [45], [46]. Very likely, using CUDA [47], [48] is preferable to higher-level languages.…”
Section: Discussion
confidence: 99%
“…The graphics processing unit (GPU) is one of the most popular parallel computing platforms of recent years. For tasks that have no data-access conflicts and process different data with the same instructions (SIMD), a GPU can achieve a high speedup [2] [19]. Every packet will update SEAV and LDCA.…”
Section: Distributed Super Points Detection On GPU
confidence: 99%