2022
DOI: 10.1109/tcc.2020.3018089

A Remote Memory Sharing System for Virtualized Computing Infrastructures

Cited by 5 publications (3 citation statements)
References 34 publications
“…In order to reduce the processing time of phase unwrapping, shared memory technology is used to improve the computational efficiency [44,45]. Prior to any calculations in phase unwrapping, the data are fetched from the embedded GPU memory and entered into the shared memory within the confines of the CUDA kernel.…”
Section: Central Processing Unit (CPU) and GPU Processing
Citation type: mentioning; confidence: 99%
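The staging step this excerpt describes, loading data from global GPU memory into on-chip shared memory inside the CUDA kernel before any computation, can be sketched roughly as follows. The kernel name, tile size, and the placeholder for the unwrapping arithmetic are illustrative assumptions and are not taken from the cited work; the kernel is meant to be launched with blockDim.x equal to TILE.

#include <cuda_runtime.h>

#define TILE 256  // assumed tile size; launch the kernel with blockDim.x == TILE

// Hypothetical kernel: each block stages its tile of wrapped phase values in
// shared memory, synchronizes, and then works on the fast on-chip copy
// instead of repeatedly reading global memory.
__global__ void unwrap_tile(const float* __restrict__ wrapped,
                            float* __restrict__ unwrapped,
                            int n)
{
    __shared__ float tile[TILE];                  // on-chip shared memory
    int gid = blockIdx.x * blockDim.x + threadIdx.x;

    if (gid < n)
        tile[threadIdx.x] = wrapped[gid];         // fetch from global GPU memory
    __syncthreads();                              // make the tile visible to the whole block

    if (gid < n) {
        // ...phase-unwrapping arithmetic on tile[threadIdx.x] would go here...
        unwrapped[gid] = tile[threadIdx.x];       // write the result back to global memory
    }
}

Staging the data once per block and synchronizing is what lets neighbouring threads reuse each other's values at shared-memory latency, which is the usual motivation for the technique the excerpt refers to.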
“…The premises for the design and implementation of a cloud platform that meets modern requirements are our previously offered and approved solutions for building high-performance computing infrastructures [11][12], AI-based big data gathering, classification, and processing [13][14], optimizing cloud computing environments [15], optimizing energy consumption in electronic infrastructures [16], efficiently using HPC resources in linear arithmetic calculations [17], and the provision of cloud services [18].…”
Section: Introduction
Citation type: mentioning; confidence: 99%
“…Before processing data, the framework decompresses the data if the input file is compressed in HDFS. Our recent studies [7,8,9] show that the average memory usage for selected scientific workflows is 13-17% for Hadoop and 20-40% for Spark jobs, leaving most of the RAM of the HDFS nodes unused. Therefore, using this free RAM may boost the performance of HDFS processing.…”
Section: Introduction
Citation type: mentioning; confidence: 99%