2014
DOI: 10.1109/tpami.2013.135

Consistent Latent Position Estimation and Vertex Classification for Random Dot Product Graphs

Abstract: In this work, we show that using the eigen-decomposition of the adjacency matrix, we can consistently estimate latent positions for random dot product graphs provided the latent positions are i.i.d. from some distribution. If class labels are observed for a number of vertices tending to infinity, then we show that the remaining vertices can be classified with error converging to Bayes optimal using the $k$-nearest-neighbors classification rule. We evaluate the proposed methods on simulated data and a graph derived from Wikipedia.
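A minimal sketch of the two-step procedure the abstract describes, assuming a symmetric, hollow adjacency matrix and a known embedding dimension d and neighborhood size k; it uses NumPy and scikit-learn, and the function and variable names are illustrative rather than the authors' own:

```python
# Sketch of the pipeline in the abstract, not the authors' code:
# (1) adjacency spectral embedding via the top-d eigendecomposition of A,
# (2) k-nearest-neighbor classification on the estimated latent positions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def adjacency_spectral_embedding(A, d):
    """Scale the top-d eigenvectors of A (by eigenvalue magnitude) by the
    square roots of the corresponding eigenvalue magnitudes."""
    vals, vecs = np.linalg.eigh(A)                 # A is symmetric
    top = np.argsort(np.abs(vals))[::-1][:d]       # indices of the top-d eigenvalues
    return vecs[:, top] * np.sqrt(np.abs(vals[top]))

def classify_vertices(A, labels, labeled_idx, unlabeled_idx, d=2, k=5):
    """Fit k-NN on the embedded labeled vertices, predict the rest."""
    X_hat = adjacency_spectral_embedding(A, d)
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X_hat[labeled_idx], labels[labeled_idx])
    return knn.predict(X_hat[unlabeled_idx])
```

In practice d is typically chosen by inspecting the eigenvalue scree plot and k by cross-validation on the labeled vertices.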

Cited by 90 publications (104 citation statements)
References 23 publications
“…It is proved in [14,41] that the adjacency spectral embedding provides a consistent estimate of the true latent positions in random dot product graphs. The key to this result is a tight concentration, in Frobenius norm, of the adjacency spectral embedding, $\hat{X}$, about the true latent positions $X$.…”
Section: Definition 4 (Given an Adjacency Matrix)
confidence: 99%
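The concentration this excerpt refers to can be illustrated empirically. The sketch below is an illustration under assumed Dirichlet latent positions, not the construction used in the proof: it samples an RDPG, embeds its adjacency matrix, and reports the Frobenius error after an orthogonal Procrustes alignment, which is needed because latent positions are identifiable only up to an orthogonal transformation.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(0)
n, d = 2000, 2
# i.i.d. latent positions (first two coordinates of Dirichlet(1,1,1) draws),
# chosen so that all pairwise dot products lie in [0, 1]
X = rng.dirichlet(np.ones(3), size=n)[:, :d]
P = X @ X.T                                    # edge probabilities
A = np.triu(rng.binomial(1, P), 1)
A = A + A.T                                    # symmetric, hollow adjacency matrix

# adjacency spectral embedding: top-d eigenpairs of A, eigenvectors scaled
# by the square roots of the corresponding eigenvalue magnitudes
vals, vecs = np.linalg.eigh(A)
top = np.argsort(np.abs(vals))[::-1][:d]
X_hat = vecs[:, top] * np.sqrt(np.abs(vals[top]))

W, _ = orthogonal_procrustes(X_hat, X)         # best orthogonal alignment to X
print("Frobenius error:", np.linalg.norm(X_hat @ W - X))
```

Rerunning with larger n, the per-vertex (row-wise) error of the aligned embedding should shrink, which is the sense in which the embedding is consistent.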
“…Therefore, each entry of $U_P^{\top}(A-P)U_P$ is of order $O(\log n)$ asymptotically almost surely, and as a consequence, … $\sqrt{n\rho_n}$ (see [41]).…”
Section: Proof of Theorem 15
confidence: 99%
“…A further generalization asserts that each node is in its own group, and therefore has a "latent position" that characterizes its probability of connecting with other nodes (homologous to latent variable models in neural coding) [88]. A particularly popular version of these models assumes that the probability of connections between a pair of nodes is equal to the dot product between the nodes' latent positions [89][90][91]. In these models, an extensive set of theoretical investigations has established the kinds of claims we desire when using a statistical model to make inferences about our data [92,93], as well as a number of extensions, including a generalized random dot product [94], a random dot product with node-wise covariates [95], and a latent structure model [96] (for review, see [97]).…”
Section: Statistical Models of Connectomes
confidence: 99%
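Written out, the dot-product link described in this excerpt is the defining assumption of a random dot product graph; the display below restates it in standard RDPG notation (not quoted verbatim from the cited works):

```latex
\[
  \Pr\left(A_{ij} = 1 \mid X_i, X_j\right) = X_i^{\top} X_j,
  \qquad \text{independently for } i < j,\quad A_{ji} = A_{ij},\quad A_{ii} = 0,
\]
% with X_1, ..., X_n drawn i.i.d. from a distribution F whose support keeps
% every dot product X_i^T X_j inside [0, 1].
```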
“…First, eigen-decomposition of the adjacency matrix of a random dot product graph $G'$ leads to consistent latent positions, where vertex nomination performance via $k$-nearest-neighbor of these latent positions converges to Bayes optimal as $|V| = n \to \infty$ and $k/m \to 0$ (where the size of the graph increases, but the relative proportion of vertex classes and of interesting vertices does not). Empirical results on random, induced subgraphs of the Wikipedia graph validate the application of this technique to real data. The proofs in Ref depend on the generative graph model being a random dot product graph, which is relaxed to an unknown link function by Tang et al. Taken together, this pair of papers begins to bridge the gap between theoretical results and naturally occurring data on this topic.…”
Section: Literature Review
confidence: 99%
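One simple reading of the nomination step this excerpt describes is to rank unlabeled vertices by how close they sit, in the estimated latent space, to the known interesting vertices. The snippet below is that rough reading under stated assumptions (Euclidean distances in the embedding, a fixed k), not the cited papers' exact scheme:

```python
import numpy as np

def nominate(X_hat, interesting_idx, candidate_idx, k=5):
    """Rank candidate vertices by mean distance, in the embedding X_hat, to
    their k closest known interesting vertices; smaller score = stronger nominee."""
    seeds = X_hat[interesting_idx]                # embeddings of known interesting vertices
    cand = X_hat[candidate_idx]                   # embeddings of vertices to be ranked
    dists = np.linalg.norm(cand[:, None, :] - seeds[None, :, :], axis=2)
    k_eff = min(k, dists.shape[1])
    scores = np.sort(dists, axis=1)[:, :k_eff].mean(axis=1)
    order = np.argsort(scores)                    # best nominees first
    return [candidate_idx[i] for i in order]
```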