2016
DOI: 10.1016/j.future.2015.10.009

Parallel Branch-and-Bound in multi-core multi-CPU multi-GPU heterogeneous environments

Abstract: We investigate the design of parallel B&B in large-scale heterogeneous compute environments where processing units can be composed of a mixture of multiple shared-memory cores, multiple distributed CPUs and multiple GPU devices. We describe two approaches addressing the critical issue of how to map the B&B workload onto the different levels of parallelism exposed by the target compute platform. We also contribute a thorough large-scale experimental study which allows us to derive a comprehensive and fair analysis…
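The core idea of the abstract — evaluating many B&B tree nodes concurrently across the available processing units — can be illustrated with a minimal sketch. This is not the paper's implementation: the 0/1 knapsack instance and helper names are invented, and a thread pool stands in for the CPU cores and GPU devices of the paper (a process pool or GPU offload would be needed for real speedup, since CPython threads share the GIL).

```python
from concurrent.futures import ThreadPoolExecutor

# Toy 0/1 knapsack instance (invented for illustration).
values   = [60, 100, 120]
weights  = [10, 20, 30]
capacity = 50

def bound(node):
    """Fractional-relaxation upper bound for a partial assignment."""
    level, value, weight = node
    if weight > capacity:
        return 0.0
    b, w = float(value), weight
    for i in range(level, len(values)):
        if w + weights[i] <= capacity:
            w += weights[i]
            b += values[i]
        else:
            b += values[i] * (capacity - w) / weights[i]
            break
    return b

def solve(workers=4):
    best = 0
    frontier = [(0, 0, 0)]  # (next item index, value so far, weight so far)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while frontier:
            # Bound every frontier node concurrently; in the paper's setting
            # this is the step that gets mapped onto CPUs and GPU devices.
            bounds = list(pool.map(bound, frontier))
            next_frontier = []
            for (level, value, weight), ub in zip(frontier, bounds):
                if weight <= capacity and value > best:
                    best = value
                if ub <= best or level == len(values):
                    continue  # pruned, or a leaf
                next_frontier.append((level + 1, value + values[level],
                                      weight + weights[level]))  # take item
                next_frontier.append((level + 1, value, weight))  # skip item
            frontier = next_frontier
    return best

print(solve())  # optimal value for this instance: 220
```

The synchronous level-by-level expansion keeps the sketch short; the paper's contribution lies precisely in the harder problem of distributing such highly irregular workloads across heterogeneous units without this lock-step structure.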

Cited by 29 publications (29 citation statements)
References 45 publications
“…The PFSP has been frequently used as a test-case for parallel B&B algorithms, as the huge amount of generated nodes and the highly irregular structure of the search tree raise multiple challenges in terms of design and implementation on increasingly complex parallel architectures, e.g. grid computing (Mezmaz et al., 2007; Drozdowski et al., 2011; Bendjoudi et al., 2012), multicore CPUs (Mezmaz et al., 2014a; Gmys et al., 2016a), GPUs and many-core devices (Chakroun et al., 2013; Gmys et al., 2016b; Melab et al., 2018), clusters of GPUs (Vu and Derbel, 2016) or FPGAs (Daouri et al., 2015).…”
Section: Parallelism
confidence: 99%
“…Parallelization strategies can be combined to exploit complementary forms of parallelization. For example, low-level and domain-decomposition parallelism have been jointly applied to branch-and-X algorithms [Vu and Derbel, 2016, Adel et al, 2016] and to dynamic programming [Maleki et al, 2016], and low-level and multi-search parallelism to genetic algorithms [Abbasian and Mouhoub, 2013, Munawar et al, 2009]. In total, we found eight studies which apply such combinations.…”
Section: Algorithmic Parallelization and Computational Parallelization
confidence: 99%
“…Finally, it should be noticed that parallelization strategies are not mutually incompatible and may be combined into comprehensive algorithmic designs [Crainic et al, 2006, Crainic, 2019]. For example, low-level and decomposition parallelism have been jointly applied to branch-and-bound [Adel et al, 2016] and dynamic programming [Vu and Derbel, 2016], [Maleki et al, 2016], and low-level parallelism and cooperative multi-search have been applied to a hybrid metaheuristic [Munawar et al, 2009] which uses a genetic algorithm and hill climbing. While the aforementioned parallelization strategies have been formulated for the class of metaheuristics, the strategy-defining principles apply generally to parallelizing optimization algorithms, so that their scope of applicability can be straightforwardly extended to other algorithm classes, including exact methods and (problem-specific) heuristics. For example, Gendron and Crainic [1994] have defined three types of parallelism for branch-and-bound: their type 1 parallelism refers to parallelism when performing operations on generated subproblems, such as executing the bounding operation in parallel for each subproblem.…”
confidence: 99%
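The decomposition parallelism discussed in the statements above — expanding the root once into independent subproblems and letting each worker run its own sequential B&B — can be sketched as follows. This is an illustrative sketch, not taken from the cited works; the knapsack data and helper names are hypothetical, and a thread pool stands in for the distributed workers.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

# Toy 0/1 knapsack instance (invented for illustration).
values   = [60, 100, 120, 70]
weights  = [10, 20, 30, 15]
capacity = 50

def solve_subtree(fixed):
    """Sequential depth-first B&B over items 2..n-1, with the decisions
    for items 0 and 1 fixed by `fixed` (the decomposition step)."""
    value  = sum(v for v, take in zip(values, fixed) if take)
    weight = sum(w for w, take in zip(weights, fixed) if take)
    if weight > capacity:
        return 0  # the fixed prefix is already infeasible
    best = value
    stack = [(len(fixed), value, weight)]
    while stack:
        level, val, wt = stack.pop()
        if wt <= capacity and val > best:
            best = val
        if level == len(values) or wt > capacity:
            continue
        if val + sum(values[level:]) <= best:
            continue  # optimistic bound: even taking every remaining item loses
        stack.append((level + 1, val + values[level], wt + weights[level]))
        stack.append((level + 1, val, wt))
    return best

def solve():
    # Decompose the root: fixing the first two decisions yields four
    # independent subtrees, each explored by its own worker.
    subtrees = list(product([False, True], repeat=2))
    with ThreadPoolExecutor(max_workers=4) as pool:
        return max(pool.map(solve_subtree, subtrees))

print(solve())  # optimal value for this instance: 230
```

A real combined design would also share the incumbent `best` between workers (cooperative pruning) and apply low-level parallelism inside each subtree, e.g. bounding on a GPU; the sketch keeps the subtrees fully independent for clarity.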
“…The main feature of a multicore chip is the tremendous increase in performance obtained by increasing the number of cores instead of the frequency [3]. To improve multicore CPU performance, three factors need to be tuned: parallelism granularity, the programming model, and language compilers [10]. Nowadays, most embedded applications are parallel-processing applications.…”
confidence: 99%