Proceedings of the 21st ACM International Conference on Information and Knowledge Management 2012
DOI: 10.1145/2396761.2398630
On compressing weighted time-evolving graphs

Abstract: Existing graph compression techniques mostly focus on static graphs. However, for many practical graphs, such as social networks, the edge weights frequently change over time. This raises the question of how to compress dynamic graphs while maintaining most of their intrinsic structural patterns at each time snapshot. In this paper, we show that the encoding cost of a dynamic graph is proportional to the heterogeneity of a three-dimensional tensor that represents the dynamic graph. We propose an effecti…
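The abstract's tensor view of a dynamic graph can be sketched as follows. This is a minimal illustration under assumptions: the tensor layout (time × vertex × vertex) matches the abstract's description, but the heterogeneity measure used here (plain variance of the entries) is a stand-in, not the paper's actual definition.

```python
# Sketch: a weighted time-evolving graph as a 3-D tensor.
# Assumption: heterogeneity is approximated by entry variance;
# the paper's precise measure may differ.
import numpy as np

n_snapshots, n_vertices = 3, 4
# tensor[t, i, j] = weight of edge (i, j) at time snapshot t
tensor = np.zeros((n_snapshots, n_vertices, n_vertices))
tensor[0, 0, 1] = 1.0
tensor[1, 0, 1] = 1.5   # this edge's weight changes over time
tensor[2, 0, 1] = 1.5
tensor[:, 2, 3] = 2.0   # a temporally stable edge

# A more homogeneous tensor (similar weights within regions) is
# cheaper to encode; higher variance suggests higher encoding cost.
heterogeneity = float(tensor.var())
print(tensor.shape, heterogeneity)
```

A compressor exploiting this structure would aim to partition the tensor into low-variance regions before encoding.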


Cited by 26 publications (12 citation statements)
References 7 publications
“…Work on compressing dynamic graphs for storage includes lossy compression of time-evolving graphs [Henecka and Roughan 2015], and encoding of dynamic, weighted graphs as three-dimensional arrays (tensors) by reducing heterogeneity and guaranteeing compression error within bounds [Liu et al 2012]. The latter is based on hierarchical clusters of edge weights and graph compression using run-length encoding, traversing first the tensor's time dimension and second the tensor's vertex dimensions.…”
Section: Influence-based Methods
confidence: 99%
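The time-first traversal with run-length encoding described in the citation statement above can be sketched as follows. This is illustrative only: the traversal order follows the statement, but the RLE shown omits the hierarchical clustering of edge weights that the paper's encoder also applies.

```python
# Sketch: run-length encoding a dynamic-graph tensor, traversing the
# time dimension first and the vertex dimensions second. Illustrative
# only; the paper's encoder additionally clusters edge weights.
import numpy as np

def rle(values):
    """Run-length encode a 1-D sequence as (value, run_length) pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [(v, n) for v, n in runs]

tensor = np.zeros((3, 2, 2))   # (time, vertex, vertex)
tensor[:, 0, 1] = 1.0          # edge (0, 1) holds weight 1.0 in every snapshot

# Time-first traversal: for each vertex pair, emit its weights across
# snapshots, so temporally stable edges collapse into long runs.
sequence = tensor.transpose(1, 2, 0).ravel()
encoded = rle(sequence)
print(encoded)   # [(0.0, 3), (1.0, 3), (0.0, 6)]
```

Because edge weights in real graphs tend to be stable across adjacent snapshots, ordering time innermost in the traversal produces longer runs than a vertex-first order would.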
“…Threshold-moving tries to move the output threshold toward inexpensive classes so that examples with higher costs become harder to misclassify [17]. Other similar work on this issue includes [18,19,20,24]. Although some work has been done to address the data imbalance problem in neural networks, little literature so far has dealt with the imbalance problem in deep networks.…”
Section: Class Imbalance Problem in Neural Networks
confidence: 99%
“…The goal is to either reduce storage and manipulation costs, or simplify structure. Summarizing temporal networks has not seen much work, except recent papers based on bits-storage-compression [13], or extracting a list of recurrent sub-structures over time [22]. Unlike these, we are the first to focus on hierarchical condensation: using structural merges, giving a smaller propagation-equivalent temporal network.…”
Section: Related Work
confidence: 99%