2018
DOI: 10.1007/978-3-319-93031-2_22
Observations from Parallelising Three Maximum Common (Connected) Subgraph Algorithms

Abstract: We discuss our experiences adapting three recent algorithms for maximum common (connected) subgraph problems to exploit multicore parallelism. These algorithms do not easily lend themselves to parallel search, as the search trees are extremely irregular, making balanced work distribution hard, and runtimes are very sensitive to value-ordering heuristic behaviour. Nonetheless, our results show that each algorithm can be parallelised successfully, with the threaded algorithms we create being clearly better than …
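The abstract's point about irregular search trees is the crux of the parallelisation difficulty: if each thread is statically handed one top-level subtree, a skewed tree leaves most threads idle. The sketch below illustrates the alternative, dynamic distribution through a shared work queue. It is a generic illustration rather than the paper's actual threaded implementation, and 'expand' is a hypothetical callable returning a node's children:

```python
import queue
import threading

def parallel_count_leaves(roots, expand, n_workers=4):
    # Shared queue: any idle worker can pick up any outstanding subproblem,
    # so a skewed subtree gets rebalanced across threads automatically.
    work = queue.Queue()
    for r in roots:
        work.put(r)
    counts = [0] * n_workers  # one slot per worker avoids a shared-counter race

    def worker(i):
        while True:
            node = work.get()        # blocks until a subproblem is available
            children = expand(node)  # hypothetical: children of this node
            if children:
                for c in children:
                    work.put(c)
            else:
                counts[i] += 1       # a leaf of the search tree
            work.task_done()

    for i in range(n_workers):
        threading.Thread(target=worker, args=(i,), daemon=True).start()
    work.join()                      # returns once every put is matched by task_done
    return sum(counts)
```

For example, parallel_count_leaves([5], lambda n: list(range(n))) explores a deliberately skewed tree in which node n has children 0..n-1, so a static split of the root's children would be badly unbalanced.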

Cited by 3 publications (9 citation statements)
References 32 publications
“…Better insights are given by scatter plots that compare times on a per-instance basis (as done in Fig. 5), or by the aggregate speed-up measure introduced in [12], which measures the timeout ratio for solving the same number of instances. For instance, Sequential Glasgow solves 14,356 instances within 1000s, and the hardest of these instances is solved in 939s.…”
Section: Combining Solvers To Take the Best Of Them (mentioning)
confidence: 99%
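The aggregate speed-up measure the quote paraphrases can be computed directly from per-instance runtimes. A minimal sketch follows, with hypothetical function and parameter names; the exact definition in [12] may differ in detail:

```python
def solved_within(runtimes, timeout):
    # Sorted runtimes of the instances one solver finishes before the timeout.
    return sorted(t for t in runtimes if t < timeout)

def aggregate_speedup(fast_runtimes, slow_runtimes, timeout):
    # Per-instance timeout each solver would need to solve the same number y
    # of instances (the two instance sets may differ), taking y as the largest
    # count both solvers achieve within the overall timeout.
    fast = solved_within(fast_runtimes, timeout)
    slow = solved_within(slow_runtimes, timeout)
    y = min(len(fast), len(slow))
    return slow[y - 1] / fast[y - 1]  # how many times longer the slow solver needs
```

With the quoted figures, Sequential Glasgow needs a 939s per-instance timeout to solve its 14,356 easiest instances; a solver reaching the same count within, say, 93.9s would score an aggregate speed-up of 10.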
“…vertical distance between two lines therefore shows how many more instances can be solved by one solver than another, if every instance is run separately with the chosen x timeout. The horizontal distance shows how many times longer the per-instance timeout would need to be to allow the rightmost algorithm to succeed on y out of the 14,621 instances (bearing in mind that the two sets of y instances could be different), and gives a measure called aggregate speedup [20].…”
Section: Improving Sequential Search (mentioning)
confidence: 99%
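The vertical/horizontal reading of cumulative plots described in this quote is straightforward to reproduce. A small matplotlib sketch, where solver names and data are placeholders rather than the paper's results:

```python
import matplotlib.pyplot as plt

def cumulative_plot(per_solver_runtimes, timeout):
    # One cumulative "instances solved within t" curve per solver.
    # The vertical gap between curves is how many more instances one
    # solver finishes at a given timeout; the horizontal gap at height y
    # is the aggregate speed-up discussed above.
    for name, runtimes in per_solver_runtimes.items():
        xs = sorted(t for t in runtimes if t < timeout)
        plt.step(xs, range(1, len(xs) + 1), where="post", label=name)
    plt.xscale("log")
    plt.xlabel("per-instance timeout (s)")
    plt.ylabel("instances solved")
    plt.legend()
    plt.show()
```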
“…Exploiting multiple cores to speed up constraint programming solvers remains an active area of research, with no universally perfect solution being available. Four of the more common approaches are based upon decompositions [26,34], work-stealing [39,6,35,20], parallel discrepancy searches [40,41], and algorithm portfolios [32]. Decomposition approaches are unsuitable for decision problems, or problems where we have good value-ordering heuristics, because the decomposition interferes strongly with the shape of the search tree [34].…”
Section: Parallel Search (mentioning)
confidence: 99%
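Of the four approaches this quote lists, the algorithm portfolio is the simplest to sketch. A minimal, hedged Python illustration, not any cited solver's actual API; real portfolios also cancel losing runs cooperatively:

```python
import concurrent.futures as cf

def portfolio(solvers, instance):
    # Run every solver on the same instance in parallel and return the
    # first answer to come back. 'solvers' is a hypothetical list of
    # callables, each taking an instance and returning a result.
    pool = cf.ThreadPoolExecutor(max_workers=len(solvers))
    futures = [pool.submit(solve, instance) for solve in solvers]
    done, losers = cf.wait(futures, return_when=cf.FIRST_COMPLETED)
    for f in losers:
        f.cancel()             # only not-yet-started work can be cancelled
    pool.shutdown(wait=False)  # don't block on still-running losers
    return next(iter(done)).result()
```

Usage would look like portfolio([solve_with_branch_and_bound, solve_with_clique_encoding], g), where both callables are placeholders for concrete solver wrappers.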