2007
DOI: 10.1007/s10589-007-9030-3
Implementation of a primal–dual method for SDP on a shared memory parallel architecture

Abstract: Primal–dual interior point methods, and the HKM method in particular, have been implemented in a number of software packages for semidefinite programming. These methods have performed well in practice on small to medium sized SDPs. However, primal–dual codes have had trouble solving larger problems because of their storage requirements and computational effort. In this paper we describe a parallel implementation of the primal–dual method on a shared memory system. Computational results are presen…


Cited by 49 publications (55 citation statements)
References 17 publications
“…We do this by using a Gaussian kernel W_D = exp(−D²/(2σ²)) and set the width parameter σ = D̄/√2, which gives good results in practice. Another almost equally good choice is σ = 1/3, which can be justified by the fact that the Google distance is scale invariant and lies mostly in [0,1]. Table 1 shows the number of clustering errors, i.e., the number of data points assigned to a different group than the intended one, for each of the data sets just described.…”
Section: Experimental Results with the Google Distance
confidence: 99%
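The kernel construction quoted above can be sketched numerically. This is a minimal illustration only: the distance matrix `D` below is made-up example data, and taking the mean pairwise distance as D̄ is an assumption about the quoted formula, not something stated in this report.

```python
import numpy as np

# Hypothetical symmetric matrix of pairwise (Google) distances, made up
# purely for illustration.
D = np.array([[0.0, 0.2, 0.8],
              [0.2, 0.0, 0.7],
              [0.8, 0.7, 0.0]])

# Width parameter sigma = D_bar / sqrt(2), assuming D_bar is the mean distance.
sigma = D.mean() / np.sqrt(2)

# Entrywise Gaussian kernel W_D = exp(-D^2 / (2 sigma^2)); entries lie in
# (0, 1], with 1 on the diagonal since the self-distance is zero.
W = np.exp(-D**2 / (2 * sigma**2))
```

Since the Google distance is said to lie mostly in [0, 1], the fixed choice σ = 1/3 would simply replace the `sigma` line above with a constant.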
“…Efficient solvers are available for this kind of optimization problem, such as CSDP [1] or SeDuMi [10]. In order to implement the constraints Y_ij ≥ −1/(k−1) with these solvers, positive slack variables Z_ij have to be introduced together with the equality constraints Y_ij − Z_ij = −1/(k−1).…”
Section: Max-k-Cut
confidence: 99%
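The slack-variable rewriting in the quote above can be checked with a small numerical sketch. The matrix `Y` and the value of `k` are made-up illustrative data, not taken from the citing paper.

```python
import numpy as np

k = 3                # number of groups in the max-k-cut relaxation (illustrative)
lb = -1.0 / (k - 1)  # lower bound -1/(k-1) on the off-diagonal entries Y_ij

# Hypothetical feasible Gram matrix Y from the SDP relaxation (values made up).
Y = np.array([[ 1.0, -0.5,  0.1],
              [-0.5,  1.0, -0.2],
              [ 0.1, -0.2,  1.0]])

# Introducing slacks Z_ij = Y_ij + 1/(k-1) >= 0 turns the inequality
# Y_ij >= -1/(k-1) into the equality constraint Y_ij - Z_ij = -1/(k-1),
# which is the form accepted by equality-constrained SDP solvers.
Z = Y - lb
```

Feasibility of the original bound is then exactly nonnegativity of `Z`, and the equality constraint holds entrywise by construction.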
“…License: open source. Developer: Brian Borchers. Capabilities: LP, SDP. Algorithm: interior-point method. Special features: callable library, interface to R. Website: http://projects.coin-or.org/Csdp. Reference: [4,20]. Parallel versions in both shared [21] and distributed [22] memory configurations are available. CSDP is one of the most advanced solvers; it exploits sparsity in the data matrices A^(i).…”
Section: CSDP
confidence: 99%
“…As a result, parallel versions of several SDP software packages have been developed, such as PDSDP [1], SDPARA [27], and a parallel version of CSDP [5]. The first two are designed for PC clusters using MPI and ScaLAPACK, and the last is designed for a shared memory computer architecture [5].…”
Section: Introduction
confidence: 99%