2019
DOI: 10.1007/978-3-030-18764-4_12
Many-Core Branch-and-Bound for GPU Accelerators and MIC Coprocessors

Cited by 5 publications (3 citation statements)
References 14 publications
“…Earlier GPUs were mainly used to accelerate graphics processing until NVIDIA introduced the general-purpose parallel computing architecture called CUDA (Compute Unified Device Architecture), which expanded the scope of GPU applications and made GPUs dominant in areas such as high-performance computing, artificial intelligence computing, and parallel computing. For most parallel applications, GPUs are able to provide better performance than CPUs [1]. Modern GPU architectures are based on the Single Instruction Multiple Thread (SIMT) computing model, in which 32 threads form a warp that executes the same instructions on different data at the same time.…”
Section: Introduction (mentioning, confidence: 99%)
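The warp-based SIMT execution described in this excerpt can be illustrated with a minimal CUDA sketch. This is a hypothetical example, not taken from the cited paper: every thread in a 32-thread warp executes the same kernel instruction, each on its own array element.

```cuda
#include <cuda_runtime.h>

// Each of the 32 threads in a warp executes the same instruction stream,
// but on its own element of the input array (SIMT execution).
__global__ void scaleKernel(const float* in, float* out, int n, float factor) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (idx < n) {
        out[idx] = in[idx] * factor;  // same instruction, different data per thread
    }
}

int main() {
    const int n = 1 << 20;
    float *dIn = nullptr, *dOut = nullptr;
    cudaMalloc(&dIn, n * sizeof(float));
    cudaMalloc(&dOut, n * sizeof(float));

    // Block size is a multiple of the 32-thread warp width, so every warp is full.
    int block = 256;
    int grid = (n + block - 1) / block;
    scaleKernel<<<grid, block>>>(dIn, dOut, n, 2.0f);
    cudaDeviceSynchronize();

    cudaFree(dIn);
    cudaFree(dOut);
    return 0;
}
```

Choosing a block size that is a multiple of 32 keeps every warp fully populated, which is what allows the SIMT hardware to issue one instruction for 32 data elements at a time.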
“…In addition, many-core computing platforms, e.g., graphics processing units (GPUs) and many integrated cores (MICs), have been involved in the development of some parallel algorithms to speed up processing times [2]; such approaches are expected to become mainstream in the near future. Experimental results have shown that parallelized algorithms can significantly improve computation speed and provide better acceleration efficiency [3-5]. Usually, there are different kinds of serial image-enhancement algorithms that need to be parallelized on several high-performance computing (HPC) platforms using various parallel programming models, for example, the message passing interface (MPI), OpenMP, the compute unified device architecture (CUDA), OpenCL, and OpenACC.…”
Section: Introduction (mentioning, confidence: 99%)
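As a concrete illustration of the CUDA-style parallelization this excerpt refers to, the following is a minimal sketch (hypothetical code, not from any of the cited works) that maps a serial per-pixel brightness/contrast enhancement loop onto a GPU kernel, one thread per pixel.

```cuda
#include <cuda_runtime.h>

// One thread per pixel: a serial brightness/contrast loop expressed as a CUDA kernel.
__global__ void enhanceKernel(const unsigned char* src, unsigned char* dst,
                              int width, int height, float gain, float bias) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;  // pixel column
    int y = blockIdx.y * blockDim.y + threadIdx.y;  // pixel row
    if (x < width && y < height) {
        int i = y * width + x;
        float v = gain * src[i] + bias;                          // linear enhancement
        dst[i] = (unsigned char)fminf(fmaxf(v, 0.0f), 255.0f);   // clamp to [0, 255]
    }
}

// Host-side launch: 16x16 thread blocks tile the 2D image.
void enhance(const unsigned char* dSrc, unsigned char* dDst, int width, int height) {
    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
    enhanceKernel<<<grid, block>>>(dSrc, dDst, width, height, 1.2f, 10.0f);
    cudaDeviceSynchronize();
}
```

The 2D thread-block layout mirrors the 2D pixel grid, so the mapping from the serial nested loop to GPU threads is direct; the same loop could equally be parallelized on the CPU with an OpenMP pragma over the rows.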