2018
DOI: 10.1007/978-3-319-94776-1_37
Generalizing the Hypergraph Laplacian via a Diffusion Process with Mediators

Abstract: In a recent breakthrough STOC 2015 paper, a continuous diffusion process was considered on hypergraphs (which has been refined in a recent JACM 2018 paper) to define a Laplacian operator whose spectral properties satisfy the celebrated Cheeger's inequality. However, one peculiar aspect of this diffusion process is that each hyperedge directs flow only from vertices with the maximum density to those with the minimum density, while ignoring vertices having strictly in-between densities. In this work, we consider a …
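As a rough illustration of the process described in the abstract, the sketch below performs one explicit Euler step of a toy diffusion in which each hyperedge routes flow only from its maximum-density vertex to its minimum-density vertex. The function name, step size, and tie handling are illustrative assumptions, not the authors' exact operator.

```python
import numpy as np

def diffusion_step(x, hyperedges, weights, dt=0.01):
    """One explicit Euler step of a toy hypergraph diffusion: each hyperedge
    routes flow only from its maximum-density vertex to its minimum-density
    vertex, ignoring the in-between vertices (no mediators in this sketch)."""
    dx = np.zeros_like(x)
    for e, w in zip(hyperedges, weights):
        verts = list(e)
        u = max(verts, key=lambda v: x[v])  # maximum-density vertex
        v = min(verts, key=lambda v: x[v])  # minimum-density vertex
        flow = w * (x[u] - x[v])
        dx[u] -= flow
        dx[v] += flow
    return x + dt * dx

# tiny usage example with 4 vertices and 2 hyperedges
x = np.array([1.0, 0.2, 0.5, 0.9])
print(diffusion_step(x, [{0, 1, 2}, {2, 3}], [1.0, 2.0]))
```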

Cited by 10 publications (11 citation statements); references 20 publications.
“…• MLP+Ω_H: This is the MLP baseline with a regularization term λΩ_H added to the objective, where Ω_H is the Laplacian energy associated with the hypergraph Laplacian based on mediators [8]. • HGNN: This is the hypergraph neural network model of [12], which uses the clique-expansion Laplacian [42,1] for the hypergraph convolutional filter.…”
Section: Methods (mentioning)
confidence: 99%
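The quoted MLP+Ω_H baseline adds a Laplacian-energy penalty to a standard training objective. The sketch below shows one way such a regularizer can be wired into a loss, assuming a precomputed (hyper)graph Laplacian matrix L (e.g., one built with mediators); the function names and the cross-entropy task are illustrative.

```python
import torch
import torch.nn.functional as F

def laplacian_energy(outputs, L):
    """Omega_H(f) = trace(f^T L f): smoothness of the model outputs with
    respect to a (hyper)graph Laplacian L, e.g. one built using mediators."""
    return torch.trace(outputs.T @ L @ outputs)

def regularized_loss(model, X, y, L, lam=1e-3):
    """Cross-entropy of an MLP plus lam * Omega_H, mirroring the MLP+Omega_H
    baseline quoted above (names and task here are illustrative)."""
    out = model(X)
    return F.cross_entropy(out, y) + lam * laplacian_energy(out, L)
```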
“…One simple case is to define L as the Laplacian of the clique-expansion graph of H [1,42], where the hypergraph is mapped to a graph on the same set of nodes by adding a clique among the nodes of each hyperedge. This is the approach used in HGNN [12]; other variants use mediators instead of cliques in the hypergraph-to-graph reduction [8].…”
Section: Neural Network Approaches (mentioning)
confidence: 99%
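For concreteness, a minimal sketch of the clique-expansion Laplacian mentioned in the quote: each hyperedge is replaced by a clique on its vertices and L = D - A is returned. The per-pair weight normalization varies across papers, so the 1/(|e| - 1) factor used here is an assumption.

```python
import numpy as np
from itertools import combinations

def clique_expansion_laplacian(n, hyperedges, weights=None):
    """Clique-expansion Laplacian: replace each hyperedge by a clique on its
    vertices, accumulate the weighted adjacency A, and return L = D - A.
    The 1/(|e| - 1) per-pair weight is one common choice (an assumption)."""
    weights = weights if weights is not None else [1.0] * len(hyperedges)
    A = np.zeros((n, n))
    for e, w in zip(hyperedges, weights):
        for u, v in combinations(sorted(e), 2):
            A[u, v] += w / (len(e) - 1)
            A[v, u] += w / (len(e) - 1)
    D = np.diag(A.sum(axis=1))
    return D - A

# example: 4 vertices, one 3-vertex hyperedge and one 2-vertex hyperedge
print(clique_expansion_laplacian(4, [{0, 1, 2}, {2, 3}]))
```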
“…One drawback of the clique expansion is that the resulting graph tends to be dense, since a hyperedge is replaced by a number of edges that is quadratic in the size of the hyperedge. A similar idea is proposed in [111], but this convolutional neural network is based on a different hypergraph Laplacian shift (proposed in [112]), which requires only a linear number of edges for each hyperedge. This makes training more efficient than that of [110].…”
Section: Hypergraph Neural Network (mentioning)
confidence: 99%
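The quadratic-versus-linear edge count in the quote can be made explicit. The sketch below counts edges per hyperedge for the clique expansion and for a mediator-style expansion; the 2|e| - 3 figure assumes each mediator is joined to the two extreme vertices plus one edge between the extremes, which is an assumption about the construction.

```python
def clique_edge_count(size):
    """Edges created when a hyperedge of this size is replaced by a clique:
    quadratic in the hyperedge size."""
    return size * (size - 1) // 2

def mediator_edge_count(size):
    """Edges used by a mediator-style expansion: each of the (size - 2)
    mediators is joined to the two extreme vertices, plus the edge between
    the extremes, giving 2*size - 3 (linear; this count is an assumption)."""
    return 1 if size <= 2 else 2 * size - 3

for s in (3, 10, 50):
    print(s, clique_edge_count(s), mediator_edge_count(s))
```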
“…Another line of work uses mathematically appealing tensor methods Shashua et al [2006], Bulò and Pelillo [2009], Kolda and Bader [2009], but they are limited to uniform hypergraphs. Recent developments, however, work for arbitrary hypergraphs and fully exploit the hypergraph structure Hein et al [2013], Zhang et al [2017], Chan and Liang [2018], Li and Milenkovic [2018b], Chien et al [2019].…”
Section: Learning on Hypergraphs (mentioning)
confidence: 99%
“…This hypergraph Laplacian ignores the hypernodes in K_e := {k ∈ e : k ≠ i_e, k ≠ j_e} in the given epoch. Recently, it has been shown that a generalised hypergraph Laplacian in which the hypernodes in K_e act as "mediators" Chan and Liang [2018] satisfies all the properties satisfied by the above Laplacian. The two Laplacians are pictorially compared in Figure 2.…”
Section: HyperGCN: Enhancing 1-HyperGCN with Mediators (mentioning)
confidence: 99%
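A sketch of the mediator construction described in the quote, for a scalar signal x: within each hyperedge the extreme pair (i_e, j_e) is selected, and every remaining hypernode in K_e is connected to both of them. Edge weights and the multi-dimensional case are omitted; the function name is illustrative.

```python
import numpy as np

def mediator_expansion(x, hyperedges):
    """For each hyperedge e, pick the extreme pair (i_e, j_e) of the scalar
    signal x, let K_e := {k in e : k != i_e, k != j_e} be the mediators, and
    connect every mediator to both i_e and j_e (plus the (i_e, j_e) edge).
    A sketch of the construction quoted above; weights are omitted."""
    edges = []
    for e in hyperedges:
        verts = list(e)
        i_e = max(verts, key=lambda v: x[v])
        j_e = min(verts, key=lambda v: x[v])
        K_e = [k for k in verts if k not in (i_e, j_e)]
        edges.append((i_e, j_e))
        for k in K_e:
            edges.extend([(i_e, k), (j_e, k)])
    return edges

x = np.array([0.1, 0.9, 0.4, 0.7])
print(mediator_expansion(x, [{0, 1, 2, 3}]))
```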