2005
DOI: 10.1016/j.parco.2005.02.006
Large volume visualization of compressed time-dependent datasets on GPU clusters

Cited by 24 publications (23 citation statements)
References 12 publications
“…Using multi-GPU systems, including GPU clusters, is gaining popularity in scientific computing [5,8,9,11,20,23]. In general, these works demonstrate that such platforms can be beneficial in terms of performance, power, and price.…”
Section: Related Work
confidence: 99%
“…If the data amount exceeds the memory of the CPU or GPU, several techniques can be employed, including compressed or packed representations of the data [29], decomposition techniques, multi-resolution schemes [20,53,54], or out-of-core techniques [18]. Recent research combined bricking and decomposition with a hierarchical data structure.…”
Section: Software Techniques Coping With Large Data
confidence: 99%
“…Parallel GPU-based programming on a single node with one GPU or multiple GPUs using programming languages for the massively parallel cores on the graphics card [53][54][55][62]. With advances in GPU architecture, several algorithms have reached higher efficiency by transferring the program from the CPU to the GPU.…”
Section: Software Techniques Coping With Large Data
confidence: 99%
“…Parallel visualization is also very often used to address the issues of large data processing [2,4,15,22]. The overwhelming majority of such cases employ a divide-and-conquer strategy that involves the subdivision of the large volume dataset into parts small enough to be processed on a single processor, the successive or simultaneous processing (e.g., visualization) of these parts, and the merging of outcomes to produce the final result.…”
Section: Introduction
confidence: 99%
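The divide-and-conquer pipeline described in that statement can be sketched as follows. This is a minimal illustration only: the brick size, the helper names, and the use of a maximum-intensity reduction as a stand-in for the per-node rendering step are assumptions for the sketch, not the cited papers' actual implementation.

```python
# Divide-and-conquer sketch: subdivide a volume into bricks, process
# each brick independently (as separate cluster nodes would), then
# merge the partial results into the final outcome.

def split_into_bricks(volume, brick_size):
    """Subdivide a flat list of voxel values into fixed-size bricks."""
    return [volume[i:i + brick_size] for i in range(0, len(volume), brick_size)]

def process_brick(brick):
    """Stand-in for per-node visualization work (maximum-intensity value)."""
    return max(brick)

def merge(partials):
    """Compositing step: combine per-brick results into the final result."""
    return max(partials)

volume = [3, 7, 1, 9, 4, 2, 8, 5]              # toy 1-D "volume"
bricks = split_into_bricks(volume, 4)           # two bricks of 4 voxels each
partials = [process_brick(b) for b in bricks]   # could run on separate nodes
result = merge(partials)                        # -> 9
```

In a real GPU cluster the per-brick step would be a rendering pass and the merge an image-compositing stage, but the control flow has the same three phases.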
“…There are also approaches that use wavelet compression to reduce the size of the volume data, and thus make the retention of the entire dataset in memory possible or speed up the transmission of data over the network in parallel visualization [9,22].…”
Section: Introduction
confidence: 99%
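The wavelet-compression idea mentioned in that statement can be illustrated with a one-level 1-D Haar transform and hard thresholding of small detail coefficients; this is a hedged sketch of the principle only, since the cited systems apply multi-level 3-D transforms to volume bricks and add entropy coding on top.

```python
# Haar wavelet compression sketch: split a signal into averages and
# details, zero out small details (the lossy step that shrinks the
# data), and reconstruct an approximation of the original.

def haar_forward(data):
    """One level of the Haar transform: pairwise averages and details."""
    avgs = [(a + b) / 2 for a, b in zip(data[::2], data[1::2])]
    dets = [(a - b) / 2 for a, b in zip(data[::2], data[1::2])]
    return avgs, dets

def haar_inverse(avgs, dets):
    """Reconstruct the signal from averages and details."""
    out = []
    for s, d in zip(avgs, dets):
        out.extend([s + d, s - d])
    return out

def compress(data, threshold):
    """Zero out detail coefficients below the threshold."""
    avgs, dets = haar_forward(data)
    dets = [d if abs(d) >= threshold else 0.0 for d in dets]
    return avgs, dets

signal = [10.0, 11.0, 12.0, 16.0]
avgs, dets = compress(signal, threshold=1.0)
approx = haar_inverse(avgs, dets)   # close to the input, smaller to store
```

Zeroed coefficients compress well and, for smooth volume data, most detail coefficients are near zero, which is why such schemes can keep an entire time-dependent dataset in memory or cut network transfer cost.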