Published: 2012 | DOI: 10.1007/s10618-012-0263-0
Clustering large attributed information networks: an efficient incremental computing approach

Cited by 42 publications (25 citation statements) | References 23 publications
“…As expected from the construction of G_A, SA-Cluster is computationally expensive, namely, its time complexity is O(n^3). In order to improve the efficiency and scalability of SA-Cluster, the methods Inc-Cluster [39,232] and SA-Cluster-Opt [40] have been proposed. The main idea behind them is to reduce the number and the complexity of random walk distance computations.…”
Section: 3 (mentioning)
confidence: 99%
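The statement above refers to SA-Cluster's random-walk distance over the attribute-augmented graph G_A, whose repeated matrix products drive the O(n^3) cost that Inc-Cluster and SA-Cluster-Opt try to avoid. As an illustration only, a truncated random-walk distance of the general form used by this family of methods might be sketched as follows; the restart probability `c`, the truncation length `L`, and the function name are assumptions for the sketch, not details taken from the paper:

```python
import numpy as np

def random_walk_distance(P, c=0.15, L=10):
    """Truncated random-walk distance matrix:
        R = sum_{l=1}^{L} c * (1 - c)^l * P^l
    where P is the row-stochastic transition matrix of the
    (attribute-augmented) graph and c is the restart probability.
    Each iteration costs one dense n x n matrix product, which is
    why incremental variants try to avoid recomputing it from scratch."""
    n = P.shape[0]
    R = np.zeros((n, n))
    P_l = np.eye(n)            # holds P^l, starting from P^0 = I
    for l in range(1, L + 1):
        P_l = P_l @ P          # advance to P^l
        R += c * (1 - c) ** l * P_l
    return R
```

Because every row of P^l sums to 1, each row of R sums to the scalar (1-c) * (1 - (1-c)^L), which gives a quick sanity check on an implementation.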
“…Then, a clustering process is performed using random-walk-based node similarity [7]. Inc-Cluster (incremental SA-Cluster) improves on SA-Cluster by updating node similarity incrementally as the attribute weights change [9]. Such approaches take node attributes into account in a novel way, but the augmented graph becomes much more complex when the number of attribute values is large, so they are not well suited to complex social networks.…”
Section: Non-overlapping Community Detection Algorithms (mentioning)
confidence: 99%
“…Images in each subgraph represent a concept-preserving cluster (recall from Section 1). Note that state-of-the-art graph clustering techniques [1,3,13] cannot be directly leveraged to identify these clusters, as they do not preserve concepts, typically generate non-overlapping clusters, and do not maximally cover the entire graph. The summary compression process "compresses" S to form a summary at a reduced level of detail (denoted by V).…”
Section: The Prism Algorithm (mentioning)
confidence: 99%