2016
DOI: 10.1137/15m1043133

A Convex Matrix Optimization for the Additive Constant Problem in Multidimensional Scaling with Application to Locally Linear Embedding

Abstract: The additive constant problem has a long history in multidimensional scaling, and it has recently been used to resolve the indefiniteness of the geodesic distance matrix in ISOMAP. However, this approach adds a large positive constant to all eigenvalues of the centered geodesic distance matrix, often causing significant distortion of the original distances. In this paper, we reformulate the problem as a convex optimization over almost negative semidefinite matrices so as to achieve minimal …
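The distortion described in the abstract can be illustrated numerically. Below is a minimal NumPy sketch on synthetic data (not the paper's convex reformulation): a Lingoes-style correction adds a constant c to every squared off-diagonal dissimilarity, which shifts every non-trivial eigenvalue of the doubly centered matrix B = -1/2 J (D ∘ D) J upward by c/2, so a large c inflates all reconstructed distances.

```python
import numpy as np

def centered_gram(D):
    """B = -1/2 * J * (D ∘ D) * J, the doubly centered squared-dissimilarity matrix."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    return -0.5 * J @ (D ** 2) @ J

# Synthetic stand-in for a geodesic distance matrix: Euclidean distances
# perturbed by noise, which typically makes the matrix non-Euclidean.
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 2))
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
D = np.abs(D + 0.3 * rng.standard_normal(D.shape))
D = 0.5 * (D + D.T)
np.fill_diagonal(D, 0.0)

B = centered_gram(D)
lam = np.linalg.eigvalsh(B)                      # ascending eigenvalues
c = -2.0 * lam[0] if lam[0] < 0 else 0.0         # smallest shift making B PSD (Lingoes style)

# Adding c to every squared off-diagonal dissimilarity shifts all
# non-trivial eigenvalues of B upward by c/2, which is the distortion at issue.
D2_shifted = D ** 2 + c * (1.0 - np.eye(D.shape[0]))
print(np.round(lam, 3))
print(np.round(np.linalg.eigvalsh(centered_gram(np.sqrt(D2_shifted))), 3))
```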

Cited by 3 publications (5 citation statements)
References 37 publications
“…However, the proposal of Cailliez (1983) is unsatisfactory because kernel-k-means applied to the original dissimilarity matrix and to the euclidized one yield different results (Section 6). The same applies to the formulas suggested by Szekely and Rizzo (2014) as well as Qi (2016), for the very same reason: a modification of the distances by the same constant. Some researchers, e.g., Choi and Choi (2005) or Bao and Kadobayashi (2008), recommend explicit usage of the Cailliez euclidization.…”
Section: Introduction (mentioning)
confidence: 71%
“…¹ Therefore, we contribute the following in this paper: (i) we show that a non-Euclidean distance matrix leads to wrong clustering by kernel-k-means (Section 3); (ii) we show that the distance matrix corrections proposed in Gower and Legendre (1986) (recalled in Section 4) do not repair the matrix to a Euclidean one (Sections 5 and 7); (iii) we show that the original theorems of Lingoes (1971) and Cailliez (1983), on which the method of Gower and Legendre (1986) was based, are correct (Sections 6 and 8); (iv) we show that the Cailliez (1983) correction is not suitable for the classical kernel-k-means as there are clustering discrepancies; it may be applied only to its variants rooted in ℓ1, like the k-median algorithm (Bradley et al., 1996; Du et al., 2015; Kashima et al., 2008) (Section 6); and (v) we show under what assumptions the correction of Lingoes (1971) is suitable for use with kernel-k-means, as the clusterings before and after this transformation agree and hence the kernel trick can be validly applied, i.e., without prior checking for embeddability of the distance matrix (Section 8).…”
¹ Compare the original formulas derived by Cailliez (1983) and Lingoes (1971) and reproduced correctly later by Legendre and Legendre (1998) as well as Cox and Cox (2001). Note, however, that Legendre and Legendre (1998) refer on page 433 to the paper by Gower and Legendre (1986), notifying the reader that the form provided by Gower and Legendre (1986) is misprinted, which must have gone unnoticed by Szekely and Rizzo (2014) as well as Qi (2016).
Section: Introduction (mentioning)
confidence: 76%
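For concreteness, the two classical corrections compared in the statement above can be computed directly. The sketch below is an illustration only, not code from any of the cited papers: it assumes the usual doubly centered matrices, takes the Lingoes (1971) constant (added to the squared dissimilarities) as -2 times the smallest eigenvalue, and takes the Cailliez (1983) constant (added to the dissimilarities themselves) as the largest real eigenvalue of the standard 2n x 2n linearization of the associated quadratic eigenvalue problem.

```python
import numpy as np

def _center(M):
    """-1/2 * J * M * J with J the centering matrix."""
    n = M.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    return -0.5 * J @ M @ J

def lingoes_constant(D):
    """Smallest c such that sqrt(d_ij**2 + c) (i != j) is Euclidean: c = -2 * lambda_min."""
    return max(0.0, -2.0 * np.linalg.eigvalsh(_center(D ** 2))[0])

def cailliez_constant(D):
    """Smallest c such that d_ij + c (i != j) is Euclidean, taken here as the largest
    real eigenvalue of the block linearization [[0, 2*B2], [-I, -4*B1]]."""
    n = D.shape[0]
    B2, B1 = _center(D ** 2), _center(D)
    Z = np.block([[np.zeros((n, n)), 2.0 * B2],
                  [-np.eye(n),      -4.0 * B1]])
    return max(0.0, np.linalg.eigvals(Z).real.max())

def is_euclidean(D, tol=1e-9):
    """A dissimilarity matrix is Euclidean iff its doubly centered squared matrix is PSD."""
    return np.linalg.eigvalsh(_center(D ** 2))[0] >= -tol

# Toy non-Euclidean dissimilarities (d(1,4) = 5 violates the triangle inequality).
D = np.array([[0., 2., 2., 5.],
              [2., 0., 2., 2.],
              [2., 2., 0., 2.],
              [5., 2., 2., 0.]])
off = 1.0 - np.eye(4)
c_L, c_C = lingoes_constant(D), cailliez_constant(D)
print(c_L, is_euclidean(np.sqrt(D ** 2 + c_L * off)))   # Lingoes-corrected matrix
print(c_C, is_euclidean(D + c_C * off))                 # Cailliez-corrected matrix
```

Both corrected matrices pass the embeddability check; the objection raised in the citing paper is not that the constants fail to produce Euclidean matrices, but that the resulting shift changes the geometry that kernel-k-means sees.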
“…As described in [7], the symmetric positive semidefinite cone, the nonsymmetric positive semidefinite cone, and the conditional symmetric positive semidefinite cone are all special cases of K. Hence, Problem (1) arises in a wide range of application fields; see the papers [1, 14, 17, 4, 6, 15, 12] for more details. Moreover, several methods can be used to solve special cases of Problem (1), such as the gradient projection method [5], the proximal point-type method [2, 7], the predictor-corrector algorithm [1], the interior-point method [16, 6], the GSVD method [9], the semi-proximal ADMM [8], the Newton-type method [13], and the semismooth Newton-CG method [11].…”
(mentioning)
confidence: 99%
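Problem (1) itself is not reproduced in this excerpt, but when the cone K is the symmetric positive semidefinite cone, several of the projection-type methods listed above (e.g., gradient projection, semi-proximal ADMM, semismooth Newton-CG) build on the same primitive: the Frobenius-norm projection of a symmetric matrix onto the PSD cone, obtained by clipping negative eigenvalues at zero. A minimal sketch of that primitive follows (an illustration, not an implementation of any cited algorithm).

```python
import numpy as np

def project_psd(A):
    """Frobenius-norm projection of a symmetric matrix onto the PSD cone:
    keep the eigenvectors and clip negative eigenvalues at zero."""
    A_sym = 0.5 * (A + A.T)          # symmetrize defensively
    lam, Q = np.linalg.eigh(A_sym)
    return (Q * np.maximum(lam, 0.0)) @ Q.T

# Example: an indefinite symmetric matrix and its nearest PSD matrix.
A = np.array([[ 2., -1.,  0.],
              [-1., -3.,  1.],
              [ 0.,  1.,  1.]])
P = project_psd(A)
print(np.linalg.eigvalsh(P))         # all eigenvalues are now >= 0
```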