2017
DOI: 10.1109/tpami.2016.2614980

Clustering with Hypergraphs: The Case for Large Hyperedges

Abstract: The extension of conventional clustering to hypergraph clustering, which involves higher order similarities instead of pairwise similarities, is increasingly gaining attention in computer vision. This is due to the fact that many clustering problems require an affinity measure that must involve a subset of data of size more than two. In the context of hypergraph clustering, the calculation of such higher order similarities on data subsets gives rise to hyperedges. Almost all previous work on hypergraph cluster…

Cited by 83 publications (68 citation statements). References 32 publications.
“…A hypergraph is composed of a set of vertices and a set of non-empty subsets of vertices called hyperedges. Recently, hypergraph learning [66] has shown superior performance in various vision and multimedia tasks, such as image retrieval [20], music recommendation [7], object retrieval [13,40], social event detection [60] and clustering [35]. However, the traditional hypergraph structure treats different vertices, hyperedges and modalities equally [66], which is unreasonable, since their importance actually differs.…”
Section: Emotion Number
confidence: 99%
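The structure described in the quoted passage can be written down directly. Below is a minimal sketch, assuming a plain Python set-based representation; the vertex labels, hyperedges, and uniform weights are illustrative and not taken from the cited paper.

```python
# Minimal sketch (illustrative values): a hypergraph as a vertex set plus
# weighted hyperedges, where each hyperedge is a non-empty vertex subset.
vertices = {0, 1, 2, 3, 4}

# Hyperedges with more than two vertices carry higher-order affinities
# rather than pairwise similarities.
hyperedges = [
    frozenset({0, 1, 2}),
    frozenset({1, 2, 3, 4}),
    frozenset({0, 3, 4}),
]

# Uniform weights here; hypergraph learning methods would instead derive
# each weight from an affinity measure evaluated on the vertex subset.
weights = {e: 1.0 for e in hyperedges}

# Vertex degree: total weight of the hyperedges containing the vertex.
degree = {v: sum(w for e, w in weights.items() if v in e) for v in vertices}
print(degree)  # {0: 2.0, 1: 2.0, 2: 2.0, 3: 2.0, 4: 2.0}
```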
“…The scalable sparse subspace clustering by orthogonal matching pursuit (SSC-OMP) method [15] heuristically determines a given number of positions in the coefficient matrix that should be non-zero and then calculates the entry based on self-representation among subsets. However, this general pairwise relationship does not accurately reflect the sample correlation, especially for data pairs in the intersection of subspaces [34]. As a result, these positions may be incorrectly assigned, and the connectivity within each subspace cannot be guaranteed.…”
Section: Related Work
confidence: 99%
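The select-then-fit mechanism the quoted passage attributes to SSC-OMP can be sketched as follows. This is a hedged NumPy illustration of orthogonal matching pursuit for self-representation, not the reference SSC-OMP implementation; `omp_self_representation`, the toy data, and the sparsity level are assumptions made here for clarity.

```python
import numpy as np

def omp_self_representation(X, k):
    """X: (d, n) data matrix with samples as columns; k: sparsity level per sample."""
    d, n = X.shape
    C = np.zeros((n, n))
    for i in range(n):
        residual = X[:, i].copy()
        support = []
        for _ in range(k):
            # Correlate the residual with all samples, excluding i itself
            # and the columns already selected for this sample.
            corr = X.T @ residual
            corr[i] = 0.0
            corr[support] = 0.0
            j = int(np.argmax(np.abs(corr)))
            support.append(j)
            # Re-fit the coefficients on the current support by least squares.
            coef, *_ = np.linalg.lstsq(X[:, support], X[:, i], rcond=None)
            residual = X[:, i] - X[:, support] @ coef
        # The selected positions become the non-zero entries of column i.
        C[support, i] = coef
    return C

# Toy usage: points drawn from two 1-D subspaces in R^3.
rng = np.random.default_rng(0)
X = np.hstack([np.outer([1.0, 0.0, 0.0], rng.normal(size=5)),
               np.outer([0.0, 1.0, 1.0], rng.normal(size=5))])
X /= np.linalg.norm(X, axis=0, keepdims=True)
C = omp_self_representation(X, k=2)
# |C| + |C|.T would then feed a spectral clustering step.
```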
“…Recently, a series of optimization-based methods [18][19][20][21][22][23][24][25] were proposed to solve the multi-model fitting problem: [18][19][20][21][22] treat multi-model fitting as a multi-labelling problem, using an energy minimization function and introducing spatial information about the inliers, while [23][24][25] use hypergraphs [26] to describe the relationship between the minimum sampling sets and the hypotheses for inlier clustering. However, these optimization-based methods can hardly handle outliers and need an extra inlier threshold or scale estimation technique.…”
Section: Introduction
confidence: 99%
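The hypergraph construction mentioned in the quoted passage, where hyperedges relate sampled hypotheses to their inliers, can be sketched as follows. This is a hedged toy example assuming a 2-D line-fitting model; `build_hyperedges` and the threshold value are illustrative assumptions, not the method of the cited works.

```python
import numpy as np

def build_hyperedges(points, n_hypotheses=50, inlier_threshold=0.05, seed=0):
    """points: (n, 2) array; returns a list of inlier index sets (hyperedges)."""
    rng = np.random.default_rng(seed)
    n = len(points)
    hyperedges = []
    for _ in range(n_hypotheses):
        # Minimal sample set for a 2-D line hypothesis: two distinct points.
        i, j = rng.choice(n, size=2, replace=False)
        p, q = points[i], points[j]
        direction = q - p
        norm = np.linalg.norm(direction)
        if norm < 1e-9:
            continue
        # Unit normal of the line through p and q.
        normal = np.array([-direction[1], direction[0]]) / norm
        # Point-to-line distances define the hypothesis' inlier set, which
        # becomes one (typically large) hyperedge over the data points.
        distances = np.abs((points - p) @ normal)
        inliers = frozenset(np.flatnonzero(distances < inlier_threshold))
        if len(inliers) > 2:
            hyperedges.append(inliers)
    return hyperedges
```

Clustering the vertices of such a hypergraph then groups points that are repeatedly explained by the same hypotheses.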