Proceedings of the Web Conference 2021
DOI: 10.1145/3442381.3450035
Nonlinear Higher-Order Label Spreading

Abstract: Label spreading is a general technique for semi-supervised learning with point cloud or network data, which can be interpreted as a diffusion of labels on a graph. While there are many variants of label spreading, nearly all of them are linear models, where the incoming information to a node is a weighted sum of information from neighboring nodes. Here, we add nonlinearity to label spreading through nonlinear functions of higher-order structure in the graph, namely triangles. For a broad class of …
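For context on the abstract's contrast between linear and nonlinear models, a minimal sketch of classical *linear* label spreading (the baseline this paper generalizes) is shown below. The graph, seeding, and parameter names here are illustrative assumptions, not the paper's own code: each unlabeled node repeatedly takes a degree-normalized weighted sum of its neighbors' label scores, blended with its initial labels.

```python
import numpy as np

def label_spreading(W, Y, alpha=0.9, iters=50):
    """Linear label spreading sketch: iterate F <- alpha*S*F + (1-alpha)*Y,
    where S is the symmetrically normalized adjacency matrix.
    Hypothetical minimal example, not the paper's implementation."""
    d = W.sum(axis=1)                       # node degrees
    S = W / np.sqrt(np.outer(d, d))         # D^{-1/2} W D^{-1/2}
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y  # linear update: weighted sum of neighbors
    return F.argmax(axis=1)                  # predicted class per node

# Toy graph: two triangles joined by the edge (2, 3); one seed node per class.
W = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)
Y = np.zeros((6, 2))
Y[0, 0] = 1  # node 0 seeded with class 0
Y[5, 1] = 1  # node 5 seeded with class 1
print(label_spreading(W, Y))  # each triangle inherits its seed's class
```

The nonlinear higher-order method described in the abstract replaces the linear neighbor sum above with nonlinear functions over triangles.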

Cited by 16 publications (10 citation statements)
References 46 publications
“…As a consequence, since U is entrywise positive, there exists an M > 0 such that Φ(F) ≤ MU for all F such that ϕ(F) = 1. The thesis thus follows from Theorem 3.1 in [35].…”
Section: The Model
confidence: 86%
“…Directly modeling these higher-order interactions has led to improvements in a number of machine learning problems [42,6,22,23,39,32,2]. Along this line, there are a number of diffusions or label spreading techniques for semi-supervised learning on hypergraphs [42,14,40,21,24,37,35], which are also built on principles of similarity or assortativity. However, these methods are designed for cases where only labels are available, and do not take advantage of rich features or metadata associated with hypergraphs that are potentially useful for making accurate predictions.…”
Section: Introduction
confidence: 99%
“…We note that this is somewhat automatically obtained by our choice of smooth function f. In fact, while in the graph setting the non-smooth limit case q → ∞ is to be preferred [48], as each edge contains exactly one, two, or no core nodes, we argue that large but finite values of q are better suited to hypergraphs. In fact, when 1 ≤ q < ∞, the cost function f(x) naturally handles possible ambiguity due to the presence of hyperedges with more than two nodes in the core: while the infinity norm ∥x|_e∥_∞ is large if there is at least one core node in e but ignores the presence of a larger number of core nodes, ∥x|_e∥_q (for large but finite q) is large when there is at least one core node in e and also grows when the hyperedge contains a larger number of such nodes.…”
Section: P(e E)
confidence: 89%
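The distinction in the statement above is easy to check numerically. The toy vectors below are illustrative assumptions: two hyperedge restrictions with the same maximum entry, one containing a single high-score (core) node and one containing three. The infinity norm cannot tell them apart, while a q-norm with large but finite q grows with the number of core nodes.

```python
import numpy as np

# Hypothetical restrictions x|_e of a core-score vector to two hyperedges:
e1 = np.array([1.0, 0.1, 0.1, 0.1])  # one core node
e2 = np.array([1.0, 1.0, 1.0, 0.1])  # three core nodes

q = 10  # "large but finite" q, as the quoted passage suggests
for e in (e1, e2):
    # infinity norm is 1.0 for both; the q-norm is larger for e2
    print(np.linalg.norm(e, np.inf), np.linalg.norm(e, q))
```

The q-norm of `e2` is roughly 3^(1/10) ≈ 1.12 versus ≈ 1.0 for `e1`, so the finite-q cost distinguishes hyperedges by how many core nodes they contain.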
“…In particular, in [48] this type of hypergraph mapping is used to define generalized eigenvector centrality scores for hypergraphs which include as special cases hypergraph centralities based on tensor eigenvectors [10]. Thus, the proposed hypergraph core-periphery score can be interpreted as a particular hypergraph centrality designed specifically for core-periphery problems and gives mathematical support to the intuition that centrality measures for hypergraphs may be an indication of core and periphery [3].…”
confidence: 95%