2015 IEEE 27th International Conference on Tools with Artificial Intelligence (ICTAI)
DOI: 10.1109/ictai.2015.75
A Comparison of Decomposition Methods for the Maximum Common Subgraph Problem

Abstract: The maximum common subgraph problem is an NP-hard problem which is very difficult to solve with exact approaches. To speed up the solution process, we may decompose it into independent subproblems which are solved in parallel. We describe a new decomposition method which exploits the structure of the problem. We compare this structural decomposition with domain-based decompositions, which basically split variable domains. Experimental results show that the structural decomposition leads to …
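The domain-based decomposition mentioned in the abstract can be illustrated on a toy problem. The sketch below is hypothetical and is not the paper's algorithm: it splits one variable's domain into slices, each slice defining an independent subproblem that could be dispatched to a parallel worker, and combines the results by taking the best. The toy objective (maximize x*y subject to x+y<=10) stands in for the real maximum-common-subgraph objective.

```python
from itertools import product

def solve_subproblem(x_domain, y_domain):
    """Exhaustively solve one independent subproblem (toy stand-in for MCS)."""
    best = None
    for x, y in product(x_domain, y_domain):
        if x + y <= 10:  # toy constraint
            if best is None or x * y > best[0]:
                best = (x * y, x, y)
    return best

def domain_decompose(domain, k):
    """Split a variable's domain into k roughly equal slices."""
    return [domain[i::k] for i in range(k)]

def parallel_style_solve(x_domain, y_domain, k=4):
    # Each slice yields an independent subproblem; in a real solver these
    # would run in parallel. Here they are solved sequentially for clarity.
    results = [solve_subproblem(s, list(y_domain))
               for s in domain_decompose(list(x_domain), k)]
    return max(r for r in results if r is not None)

print(parallel_style_solve(range(1, 9), range(1, 9)))  # (25, 5, 5)
```

Because the subproblems share no state, they can be solved in any order or on any worker, which is exactly what makes decomposition attractive for parallelism.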

Cited by 5 publications (7 citation statements). References 14 publications.
“…Maximum clique algorithms have been extended for thread-parallel search [10,28,44], and in particular, work stealing strategies designed to eliminate exceptionally hard instances by forcing diversity at the top of search [30] could be beneficial in eliminating some of the rare cases where the clique algorithm is many orders of magnitude worse than the CP models. On the CP side, the focus for parallelism has been on decomposition [33], rather than fully dynamic work stealing; it would be interesting to compare these approaches.…”
Section: Results
confidence: 99%
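The contrast drawn in the statement above, between static decomposition and fully dynamic work stealing, can be sketched with a minimal example. This is an illustrative toy, not code from any of the cited works: idle workers repeatedly pull subproblems from a shared queue, so the schedule adapts at runtime instead of being fixed up front as in a static decomposition.

```python
from collections import deque
import threading

def work_stealing_demo(subproblems, n_workers=3):
    """Dynamic load balancing: idle workers pull tasks from a shared queue,
    unlike static decomposition, where each worker's share is fixed up front."""
    queue = deque(subproblems)
    lock = threading.Lock()
    results = []

    def worker():
        while True:
            with lock:
                if not queue:
                    return
                task = queue.popleft()
            results.append(task())  # "solve" the claimed subproblem

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

tasks = [lambda i=i: i * i for i in range(6)]
print(sorted(work_stealing_demo(tasks)))  # [0, 1, 4, 9, 16, 25]
```

A full work-stealing scheduler would give each worker its own deque and let idle workers steal from others' tails; the single shared queue above keeps the dynamic-vs-static contrast visible in a few lines.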
“…MCS problems are the target of Minot et al [33], who describe a structural decomposition method for constraint programming. Their decomposition generates independent subproblems, which can then be solved in parallel.…”
Section: Parallel Approaches
confidence: 99%
“…In our pseudo-code this operation is performed with the queue extraction of line 29, but in reality this operation is performed using the CUDA embedded variables representing the block and thread indices. Then, it proceeds through the main cycle of lines 30–38, where it follows a pattern similar to the one of function GPUPARALLELMCS. At this level, thread synchronization within the threads belonging to the same block is guaranteed by the SYNCHTHREADS function (mapped on the CUDA __syncthreads instruction).…”
Section: Algorithm
confidence: 99%
“…Unfortunately, many of the instances we consider behave more like decision problem instances than optimisation instances: due to the combination of a low solution density, good value-ordering heuristics, and a strong bound function in cases where the optimal solution is relatively large, it is often the case that the runtime is determined almost entirely by how long it takes to find an optimal solution, with the proof of optimality being nearly trivial. Indeed, attempts to parallelise the basic constraint programming approach by static decomposition have had limited success [29].…”
Section: Parallel Search
confidence: 99%