2009
DOI: 10.1007/978-3-642-04244-7_20
Confidence-Based Work Stealing in Parallel Constraint Programming

Cited by 47 publications (46 citation statements)
References 10 publications
“…Research on parallelization for CSP solvers includes a broad spectrum of parallel programming models (e.g., OpenMP [5][6][7], Message Passing Interface (MPI) [8,9]) and a variety of platforms (e.g., single node [5,7], cluster [9,10], and grid [11]), on a scale from a few processors to thousands. In particular, MPI is intended for high-performance parallel computing on platforms without shared memory.…”
Section: Background and Related Work
confidence: 99%
“…A boolean flag δ_i indicates whether a node is closed (both values attempted, δ_i = 0, black circle) or open (one value attempted, δ_i = 1, white circle) [8]. Although identification of a helpful guiding path is non-trivial (as in [5]), variables with particular properties have proved effective for splitting [7]. Iterative partitioning with clause learning, where search spaces of SAT subproblems may overlap, can also be an effective strategy [11].…”
Section: Background and Related Work
confidence: 99%
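The open/closed flags in this excerpt describe the usual guiding-path representation for splitting a backtracking search between workers: the donor hands the untried value of its first open node to a receiver and marks that node closed. The sketch below is a minimal illustration of that idea, assuming binary decisions; the `PathNode` type and `split` function are illustrative names, not code from the cited solvers.

```python
# Minimal guiding-path sketch (illustrative names, binary decisions assumed).
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PathNode:
    var: int      # decision variable index
    value: bool   # value tried on the current branch
    open: bool    # True: other value untried (δ_i = 1); False: closed (δ_i = 0)

GuidingPath = List[PathNode]

def split(path: GuidingPath) -> Tuple[GuidingPath, GuidingPath]:
    """Split the search space at the first open node.

    The donor keeps its current branch (the node becomes closed);
    the receiver gets a path committed to the untried value.
    """
    for i, node in enumerate(path):
        if node.open:
            donor = path[:i] + [PathNode(node.var, node.value, open=False)] + path[i + 1:]
            receiver = path[:i] + [PathNode(node.var, not node.value, open=False)]
            return donor, receiver
    raise ValueError("no open node: nothing left to split")

# Example: the decision on x3 still has its other branch unexplored, so the split happens there.
donor, receiver = split([PathNode(3, True, open=True), PathNode(7, False, open=False)])
```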
“…A different approach is based on work stealing [2,3,4]. The workload is initially distributed to the available workers.…”
Section: Introduction
confidence: 99%
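This excerpt describes the basic work-stealing setup: subproblems are dealt out to workers up front, and an idle worker then takes work from a busy one. The snippet below is a minimal sketch of that scheme, assuming threads, per-worker deques, and random victim selection; it is not the implementation from [2,3,4], it omits the confidence-based selection the paper itself proposes, and all names are made up for the example.

```python
# Minimal work-stealing sketch: initial distribution plus stealing on idleness.
import random
import threading
from collections import deque

class Worker:
    def __init__(self, wid, subproblems):
        self.wid = wid
        self.deque = deque(subproblems)   # initial workload distribution
        self.lock = threading.Lock()

    def pop_local(self):
        with self.lock:
            return self.deque.pop() if self.deque else None

    def steal_from(self, victim):
        with victim.lock:
            return victim.deque.popleft() if victim.deque else None

def run(worker, all_workers, solve):
    while True:
        task = worker.pop_local()
        if task is None:
            victims = [w for w in all_workers if w is not worker]
            task = worker.steal_from(random.choice(victims))
            if task is None:
                return                    # give up after one failed steal (simplification)
        solve(task)

# Usage: split a toy "problem" into chunks, hand them to 4 workers, process in parallel.
chunks = [list(range(i, i + 10)) for i in range(0, 100, 10)]
workers = [Worker(i, chunks[i::4]) for i in range(4)]
threads = [threading.Thread(target=run, args=(w, workers, lambda t: sum(t))) for w in workers]
for t in threads: t.start()
for t in threads: t.join()
```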
“…In our experimentation, the winner of the parallel category of the 2013 SAT Competition also achieved a speedup of only about 3 on 32 cores. Constraint programming search appears to be more suitable for parallelization than search for MIP or SAT: different strategies, including a recursive application of search goals [24], work stealing [14], problem decomposition [25], and a dedicated parallel scheme based on limited discrepancy search [23] all exhibit good speedups (sometimes near-linear) of the CP search in certain settings, especially those involving infeasible instances or scenarios where evaluating search tree leaves is costlier than evaluating internal nodes. Yet, recent developments in CP have moved towards more constraint learning during search, for which efficient parallelization becomes increasingly more difficult.…”
Section: Introduction
confidence: 99%