We study a generalization of deduplication that enables lossless deduplication of highly similar data and show that classic deduplication with a fixed chunk length is a special case. We provide bounds on the expected length of coded sequences for generalized deduplication and show that the coding has asymptotic near-entropy cost under the proposed source model. More importantly, we show that generalized deduplication converges multiple orders of magnitude faster than classic deduplication. This means that generalized deduplication can provide compression benefits much earlier than classic deduplication, which is key in practical systems. Numerical examples demonstrate our results, showing that our lower bounds are achievable and illustrating the potential gain of the generalization over classic deduplication. In fact, we show that even for a simple case of generalized deduplication, the gain in convergence speed grows linearly with the size of the data chunks.
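To illustrate how a linear gain can arise, the sketch below uses a Hamming(7,4)-based split; this is a minimal illustration under our own assumptions, not necessarily the exact construction analyzed in the paper. Because the code is perfect, every one of the 2^7 = 128 possible 7-bit chunks maps to one of only 2^4 = 16 bases, so n + 1 = 8 distinct chunks share each base and the deviation is a 3-bit label.

```python
# A minimal sketch (our own assumptions, not necessarily the paper's exact
# construction): split a 7-bit chunk into a "base" (the nearest Hamming(7,4)
# codeword, which gets deduplicated) and a "deviation" (a 3-bit syndrome naming
# the single deviating bit position, which is stored per chunk).

from itertools import product

# Parity-check matrix of Hamming(7,4); column i holds the binary expansion of i + 1.
H = [
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
]

def split(chunk):
    """chunk: tuple of 7 bits -> (base codeword, deviation in {0, ..., 7})."""
    syndrome = [sum(h * c for h, c in zip(row, chunk)) % 2 for row in H]
    pos = 4 * syndrome[0] + 2 * syndrome[1] + syndrome[2]  # 0 means chunk is a codeword
    base = list(chunk)
    if pos:
        base[pos - 1] ^= 1          # flip the single deviating bit
    return tuple(base), pos

def merge(base, pos):
    """Lossless reconstruction of the original chunk."""
    chunk = list(base)
    if pos:
        chunk[pos - 1] ^= 1
    return tuple(chunk)

# All 2^7 = 128 chunks collapse onto the 2^4 = 16 codewords (bases),
# so n + 1 = 8 distinct chunks share each base.
bases = {split(c)[0] for c in product((0, 1), repeat=7)}
assert len(bases) == 16
assert all(merge(*split(c)) == c for c in product((0, 1), repeat=7))
```

A deduplication dictionary over bases therefore fills up roughly a factor n + 1 faster than one over raw chunks, which is the kind of convergence gain the abstract refers to.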
The amount of data generated worldwide is expected to grow from 33 ZB to 175 ZB by 2025 [1], driven in part by the growth of the Internet of Things (IoT) and cyber-physical systems (CPS). To cope with this enormous amount of data, new edge (and cloud) storage techniques must be developed. Generalised Data Deduplication (GDD) is a new paradigm for reducing the cost of storage by systematically identifying near-identical data chunks, storing their common component once, and storing a compact representation of each chunk's deviation from that component. This paper presents a system architecture for GDD and a proof-of-concept implementation. We evaluated the compression gain of Generalised Data Deduplication on three data sets of varying size and content and compared it to the performance of the EXT4 and ZFS file systems, the latter of which employs classic deduplication. We show that Generalised Data Deduplication provides up to 16.75% compression gain compared to both EXT4 and ZFS for data sets with less than 5 GB of data.
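The storage-layer bookkeeping this describes can be sketched as follows; the class and the byte-split transform are illustrative assumptions, not the paper's implementation. Each incoming chunk is transformed into a (base, deviation) pair, each distinct base is stored once, and every chunk keeps only a reference to its base plus its own deviation.

```python
# A storage-layer sketch of the GDD flow described above (illustrative
# assumptions, not the paper's implementation): bases are deduplicated,
# deviations are stored per chunk alongside a reference to their base.

class GDDStore:
    def __init__(self, transform, inverse):
        self.transform = transform   # chunk -> (base, deviation)
        self.inverse = inverse       # (base, deviation) -> chunk
        self.base_ids = {}           # base -> id (each distinct base stored once)
        self.bases = []              # id -> base
        self.records = []            # one (base id, deviation) record per chunk

    def put(self, chunk):
        base, deviation = self.transform(chunk)
        if base not in self.base_ids:             # deduplicate the base
            self.base_ids[base] = len(self.bases)
            self.bases.append(base)
        self.records.append((self.base_ids[base], deviation))
        return len(self.records) - 1              # handle for later retrieval

    def get(self, handle):
        base_id, deviation = self.records[handle]
        return self.inverse(self.bases[base_id], deviation)

# Hypothetical transform for illustration: the last 8 bytes of a 4 KiB chunk
# are the deviation, the rest is the base.
store = GDDStore(lambda c: (c[:-8], c[-8:]), lambda b, d: b + d)
handle = store.put(b"\x00" * 4088 + b"\x01" * 8)
assert store.get(handle) == b"\x00" * 4088 + b"\x01" * 8
```

Classic fixed-length deduplication is recovered as the special case in which the transform produces an empty deviation.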
To provide compressed storage for large amounts of time series data, we present a new strategy for data deduplication. Rather than attempting to deduplicate entire data chunks, we employ a generalized approach in which each chunk is split into a part worth deduplicating and a part that must be stored directly. This simple principle enables greater compression of the often similar, but non-identical, chunks of time series data than classic deduplication achieves, while keeping benefits such as scalability, robustness, and on-the-fly storage, retrieval, and search for chunks. We analyze the method's theoretical performance and argue that it can asymptotically approach the entropy limit for some data configurations. To validate the method's practical merits, we finally show that it is competitive with popular universal compression algorithms on the MIT-BIH ECG Compression Test Database.
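For time series, one simple way to realize such a split (an illustrative assumption, not the exact transform used in the paper) is to treat the high-order bits of each sample as the deduplicable part and the low-order bits as the directly stored part, so that similar but non-identical chunks share a base:

```python
# One illustrative split for time-series chunks (an assumption, not the exact
# transform used in the paper): the high-order bits of each sample form the
# deduplicable base, the low-order bits form the per-chunk deviation.

LOW_BITS = 4  # number of low-order bits stored directly (illustrative choice)

def split_samples(chunk):
    """chunk: tuple of non-negative integer samples -> (base, deviation)."""
    base = tuple(s >> LOW_BITS for s in chunk)                   # coarse shape
    deviation = tuple(s & ((1 << LOW_BITS) - 1) for s in chunk)  # fine detail
    return base, deviation

def merge_samples(base, deviation):
    """Lossless reconstruction of the original samples."""
    return tuple((b << LOW_BITS) | d for b, d in zip(base, deviation))

# Two similar, non-identical chunks differ only in their low-order bits,
# so they share a base and only their small deviations are stored separately.
a = (512, 530, 1010, 505)
b = (515, 528, 1013, 507)
assert split_samples(a)[0] == split_samples(b)[0]
assert merge_samples(*split_samples(a)) == a
```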