2020
DOI: 10.1098/rspa.2019.0724
A framework for second-order eigenvector centralities and clustering coefficients

Abstract: We propose and analyse a general tensor-based framework for incorporating second-order features into network measures. This approach allows us to combine traditional pairwise links with information that records whether triples of nodes are involved in wedges or triangles. Our treatment covers classical spectral methods and recently proposed cases from the literature, but we also identify many interesting extensions. In particular, we define a mutually reinforcing (spectral) version of the classical clu…

Cited by 14 publications (14 citation statements)
References 59 publications
“…Another relatively well-established idea to represent and work with hypergraphs relies on the use of higher-order tensors [4,7]. This approach requires a uniform hypergraph, which can be fully described by an adjacency tensor, and can be used to define centrality scores in terms of the Perron eigenvector of the adjacency tensor.…”
Section: Motivation
confidence: 99%
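The tensor eigenvector centrality described in the quotation above can be sketched with a shifted power iteration. The example below is a minimal illustration, not the cited papers' exact algorithm: the hyperedges, the shift value, and the 1-norm normalization are all choices made here for the toy example, and convergence of this iteration is not guaranteed for every tensor.

```python
import itertools
import numpy as np

def apply_tensor(A, x):
    # (A x x)_i = sum_{j,k} A[i, j, k] * x[j] * x[k]
    return np.einsum('ijk,j,k->i', A, x, x)

def hypergraph_centrality(A, shift=1.0, iters=500, tol=1e-12):
    # Shifted power iteration toward a Perron Z-eigenvector of a
    # nonnegative symmetric tensor; the shift damps oscillation.
    n = A.shape[0]
    x = np.full(n, 1.0 / n)
    for _ in range(iters):
        y = apply_tensor(A, x) + shift * x
        y /= y.sum()                      # keep entries on the simplex
        if np.abs(y - x).sum() < tol:
            return y
        x = y
    return x

# toy 3-uniform hypergraph on 4 nodes with hyperedges {0,1,2} and {1,2,3};
# the adjacency tensor has a 1 at every permutation of each hyperedge
n = 4
A = np.zeros((n, n, n))
for edge in [(0, 1, 2), (1, 2, 3)]:
    for p in itertools.permutations(edge):
        A[p] = 1.0

x = hypergraph_centrality(A)
# nodes 1 and 2 belong to both hyperedges, so they score highest
```

By symmetry of the toy hypergraph, nodes 0 and 3 receive equal scores, as do nodes 1 and 2.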
“…This approximation boils down to the mapping Â = I − L, where L = I − D^(−1/2) A D^(−1/2) is the (possibly rescaled) normalized Laplacian matrix of the graph, A is the adjacency matrix, D is the diagonal degree matrix, and Â = D^(−1/2) A D^(−1/2) is the normalized adjacency matrix. The forward model for a two-layer GCN is then Z = softmax(F) = softmax(Â σ(Â X Θ^(1)) Θ^(2)),…”
Section: Neural Network Approaches
confidence: 99%
“…where X ∈ R^(n×d) is the matrix of the graph signals (the node features), Θ^(1), Θ^(2) are the input-to-hidden and hidden-to-output weight matrices of the network and σ is a nonlinear activation function (typically, σ = ReLU).…”
Section: Neural Network Approaches
confidence: 99%
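The two-layer forward model quoted above can be sketched in a few lines of numpy. This is an illustration under the assumptions stated in the quotation (σ = ReLU, row-wise softmax); the graph, feature dimensions, and weight initialization are made up for the example, and the common self-loop augmentation A + I is omitted because the quoted text does not include it.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalized_adjacency(A):
    # Â = D^(-1/2) A D^(-1/2), with D the diagonal degree matrix;
    # assumes every node has degree > 0
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def softmax(F):
    # row-wise softmax, shifted for numerical stability
    E = np.exp(F - F.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)

def gcn_forward(A, X, Theta1, Theta2):
    # Z = softmax(Â σ(Â X Θ^(1)) Θ^(2)) with σ = ReLU
    Ahat = normalized_adjacency(A)
    H = np.maximum(Ahat @ X @ Theta1, 0.0)
    return softmax(Ahat @ H @ Theta2)

# tiny example: 4-node cycle graph, d = 3 input features, 2 output classes
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
X = rng.standard_normal((4, 3))
Theta1 = rng.standard_normal((3, 5))
Theta2 = rng.standard_normal((5, 2))
Z = gcn_forward(A, X, Theta1, Theta2)  # rows are per-node class distributions
```

Each row of Z is a probability distribution over the output classes, one per node.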
“…On the one hand, higher-order adjacency tensors have been employed to model higher-order neighborhoods made by hyperedges containing three or more nodes. This approach is typically characterized by the use of hypergraphs or simplicial complexes [3,4,11,17]. On the other hand, non-Markovian stochastic processes with memory have been used to model random walks that take into account longer paths of connections [2,6,16,31].…”
Section: Introduction
confidence: 99%