1956
DOI: 10.1090/s0002-9939-1956-0078686-7
On the shortest spanning subtree of a graph and the traveling salesman problem

Abstract: Relationships observed between variables are inextricably linked to the ways in which the variables are measured. In an earlier paper it was argued that the development of a rigorous and consistent system of measurement is a necessary, although not a sufficient, condition for the discovery of mathematical relationships in human geography [5]. This note will examine the measurements presently in use in the subject to substantiate the claim that the types of variables we investigate do not facilitate mathemat…


Cited by 3,696 publications (1,207 citation statements)
References 0 publications
“…2.6 w.r.t. E becomes a minimum spanning tree problem, for which efficient solution methods exist, most notably Kruskal's algorithm (Kruskal, 1956) and Prim's algorithm (Prim, 1957). Pearson correlation coefficient matrices were derived from the raw, continuous iEEG data, the discretized iEEG data, and the distributions P_ij(x_i, x_j), (i, j) ∈ V², inferred from the CL-trees. Throughout this paper we will denote by sCC = (scc_ij) the ordinary (signed) Pearson correlation-coefficient matrix (but with zero main diagonal) and by CC = (|scc_ij|) the corresponding absolute-value Pearson correlation-coefficient matrix, from which functional networks were constructed.…”
Section: The Chow-Liu Tree As A Simple Predictive Model
confidence: 99%
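The citation statement above refers to solving the minimum spanning tree problem with Kruskal's algorithm. As a concrete illustration (a minimal sketch, not code from the cited paper; the `(weight, u, v)` edge format and function name are assumptions), the algorithm sorts edges by weight and adds each edge whose endpoints lie in different components, tracked with a union-find structure:

```python
def kruskal(num_vertices, edges):
    """Return the edges of a minimum spanning tree (Kruskal, 1956).

    edges: iterable of (weight, u, v) tuples over vertices 0..num_vertices-1.
    """
    parent = list(range(num_vertices))

    def find(x):
        # Path-compressing find of the component representative.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for weight, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:            # u and v lie in different components
            parent[ru] = rv     # merge the two components
            tree.append((weight, u, v))
    return tree

# Example: 4 vertices, edges (weight, u, v).
# kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)])
# selects the edges of weight 1, 2 and 4 (total weight 7).
```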
“…The filtration in that algorithm is performed by first computing a minimum spanning tree of the graph using the well-known Kruskal's algorithm [23] and then rooting that tree at the graph center. This rooted tree determines the order in which the vertices of the graph must be inserted into the layout.…”
Section: Force-Directed Based Approach
confidence: 99%
“…However, for the simple relational example, i.e., the relation R_ABC (with or without the tuple (a_4, b_3, c_2)), we may simply choose the well-known Kruskal algorithm [Kruskal 1956], which determines an optimum weight spanning tree. As edge weights we may choose the relative number of possible value combinations (and determine a minimum weight spanning tree) or the Hartley information gain (and determine a maximum weight spanning tree).…”
Section: Learning Relational Network
confidence: 99%
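The quoted passage notes that the same spanning-tree machinery yields either a minimum- or a maximum-weight tree, depending on the chosen edge weights. One standard trick (a sketch under assumed names, not from the cited paper) is to negate the weights, run any minimum-spanning-tree routine, and restore the signs afterwards:

```python
def kruskal_min(n, edges):
    # Tiny inlined Kruskal MST over (weight, u, v) edges, for self-containment.
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

def max_spanning_tree(n, edges):
    # Negate weights, compute a minimum spanning tree, then restore signs:
    # the result is a maximum-weight spanning tree of the original graph.
    negated = [(-w, u, v) for w, u, v in edges]
    return [(-w, u, v) for w, u, v in kruskal_min(n, negated)]
```

This is why a single algorithm suffices for both the "minimum weight" and "maximum weight" variants mentioned in the quote.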
“…given edge weights. Nevertheless, the well-known Kruskal algorithm [Kruskal 1956] is guaranteed to construct an optimum weight spanning tree, and it is clearly efficient. (There is also an even more efficient algorithm for this task [Prim 1957].)…”
Section: Exhaustive Graph Search
confidence: 99%
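The alternative algorithm referenced here is Prim's. For comparison with the Kruskal sketch above, a minimal heap-based version (again an illustrative sketch; the adjacency-list format and assumption of a connected graph are mine) grows a single tree from vertex 0, repeatedly adding the cheapest edge leaving it:

```python
import heapq

def prim(num_vertices, adj):
    """Minimum spanning tree via Prim's algorithm (Prim, 1957).

    adj: adjacency list, adj[u] = list of (weight, v) pairs; the graph
    is assumed connected. Returns a list of (weight, u, v) tree edges.
    """
    visited = [False] * num_vertices
    visited[0] = True
    # Heap of candidate edges (weight, u, v) leaving the growing tree.
    heap = [(w, 0, v) for w, v in adj[0]]
    heapq.heapify(heap)
    tree = []
    while heap and len(tree) < num_vertices - 1:
        w, u, v = heapq.heappop(heap)
        if visited[v]:
            continue            # edge leads back into the tree; discard
        visited[v] = True
        tree.append((w, u, v))
        for w2, x in adj[v]:
            if not visited[x]:
                heapq.heappush(heap, (w2, v, x))
    return tree
```

With a binary heap this runs in O(E log V), and for dense graphs an array-based variant achieves O(V²), which is the efficiency advantage the quote alludes to.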