Sixth International Conference on Parallel and Distributed Computing Applications and Technologies (PDCAT'05) 2005
DOI: 10.1109/pdcat.2005.223
Solving Very Large Traveling Salesman Problems by SOM Parallelization on Cluster Architectures

Cited by 19 publications (6 citation statements). References 3 publications.
“…This work also includes a brief overview of other partitioning methods used in combinatorial optimization (e.g. geometric, nearest-neighbour, clustering) and also proposes a decomposition based on Kohonen's self-organizing maps (SOMs) to solve the same problem using neural networks (a similar method was used in Schabauer et al. 2005 to solve the TSP). Here the similarities with our approach are more relevant, since once the problem has been partitioned into subproblems (represented by clusters of the graph representing it), these are not solved independently in parallel but according to a given ordering.…”
Section: Decomposition Techniques in Combinatorial Optimization
confidence: 99%
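The statement above refers to solving the TSP with a Kohonen self-organizing map. As background, a minimal (sequential, single-node) sketch of the classic SOM approach to the TSP is shown below: neurons are placed on a ring, each randomly drawn city pulls its winning neuron and that neuron's ring neighbours toward it, and the final tour is read off by sorting cities by their nearest neuron's position on the ring. All names and hyperparameters here are illustrative, not taken from the cited papers, and this sketch omits the cluster/MPI parallelization that is the papers' actual contribution.

```python
import numpy as np

def som_tsp(cities, n_neurons=None, iters=2000, lr=0.8, seed=0):
    """Approximate a TSP tour with a ring-shaped Kohonen SOM.

    cities: (n, 2) array of coordinates (ideally scaled to the unit square).
    Returns a permutation of city indices forming the tour.
    """
    rng = np.random.default_rng(seed)
    n = len(cities)
    m = n_neurons or 4 * n                       # neurons on the ring
    net = rng.random((m, 2))                     # random initial ring positions
    radius = m / 2                               # initial neighbourhood radius
    for _ in range(iters):
        city = cities[rng.integers(n)]           # pick a random city
        winner = np.argmin(np.linalg.norm(net - city, axis=1))
        # circular distance from every neuron to the winner on the ring
        d = np.abs(np.arange(m) - winner)
        d = np.minimum(d, m - d)
        # Gaussian neighbourhood: nearby neurons move more
        g = np.exp(-(d ** 2) / (2 * max(radius, 1.0) ** 2))
        net += lr * g[:, None] * (city - net)
        radius *= 0.9997                         # slowly shrink neighbourhood
        lr *= 0.9997                             # slowly lower learning rate
    # order cities by the ring index of their nearest neuron
    order = np.argsort([np.argmin(np.linalg.norm(net - c, axis=1))
                        for c in cities])
    return order
```

A parallel variant in the spirit of the cited work would partition the cities (or the neuron ring) across cluster nodes and exchange boundary updates via MPI, which is exactly where the communication overheads discussed in the later citation statements arise.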
“…The primary advantage of this method is its transparency to architectural constraints; however, poor implementations may diminish speedups. The parallelism of neural networks using clusters and the message-passing interface (MPI) also suffers from communication overheads and contention delays [15]. With the introduction of programmability, graphics processing units (GPUs) have gained enough flexibility for use in non-graphics applications [16].…”
Section: Introduction
confidence: 99%
“…However, poor implementation can diminish the speedup achieved. Parallelism of neural networks using clusters and MPI [10] also suffers from communication overheads and contention delays. With the introduction of programmability, graphics processing units (GPUs) have gained enough flexibility to find use in non-graphics applications [11].…”
Section: Introduction
confidence: 99%