Data Deduplication With Random Substitutions.
- Source :
- IEEE Transactions on Information Theory. Oct 2022, Vol. 68, Issue 10, p6941-6963. 23p.
- Publication Year :
- 2022
Abstract
- Data deduplication saves storage space by identifying and removing repeats in the data stream. Compared with traditional compression methods, data deduplication schemes are more computationally efficient and are thus widely used in large-scale storage systems. In this paper, we provide an information-theoretic analysis of the performance of deduplication algorithms on data streams in which repeats are not exact. We introduce a source model in which probabilistic substitutions are considered. More precisely, each symbol in a repeated string is substituted with a given edit probability. Deduplication algorithms in both the fixed-length scheme and the variable-length scheme are studied. The fixed-length deduplication algorithm is shown to be unsuitable for the proposed source model as it does not take the edit probability into account. Two modifications are proposed and shown to perform within a constant factor of the optimum for a specific class of source models when the model parameters are known. We also study the conventional variable-length deduplication algorithm and show that as the source entropy becomes smaller, the size of the compressed string vanishes relative to the length of the uncompressed string, leading to high compression ratios. [ABSTRACT FROM AUTHOR]
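- To make the fixed-length scheme the abstract refers to concrete, here is a minimal sketch of generic exact-match fixed-length deduplication, not the paper's modified algorithms: the stream is cut into fixed-size chunks, each distinct chunk is stored once under a fingerprint, and the stream is represented as a list of fingerprint references. The chunk size, SHA-256 fingerprinting, and helper names below are illustrative assumptions.

```python
import hashlib


def fixed_length_dedup(data: bytes, chunk_size: int = 8):
    """Split `data` into fixed-size chunks and store each distinct chunk once.

    Returns (chunk_store, references): chunk_store maps a SHA-256 digest to
    the chunk bytes, and references lists the digest of each chunk position
    in stream order, so the original data can be reassembled.
    """
    chunk_store = {}   # digest -> chunk bytes (stored once per distinct chunk)
    references = []    # one digest per chunk position in the stream
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        chunk_store.setdefault(digest, chunk)
        references.append(digest)
    return chunk_store, references


def reassemble(chunk_store, references) -> bytes:
    """Rebuild the original stream from the deduplicated representation."""
    return b"".join(chunk_store[d] for d in references)


if __name__ == "__main__":
    # A repeated block whose second copy has a single substituted byte:
    # exact-match chunking must store the edited chunk separately, which is
    # the behavior the paper's probabilistic-substitution model formalizes.
    block = b"ABCDEFGH" * 4
    edited = bytearray(block)
    edited[5] = ord("x")
    stream = block + bytes(edited)

    store, refs = fixed_length_dedup(stream, chunk_size=8)
    assert reassemble(store, refs) == stream
    print(f"{len(refs)} chunks referenced, {len(store)} distinct chunks stored")
```

- In this toy run, 8 chunk positions are covered by only 2 stored chunks, but the single substituted byte makes one otherwise-repeated chunk incompressible under exact matching; that sensitivity to edits is the failure mode of the unmodified fixed-length scheme discussed in the abstract.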
- Subjects :
- Algorithms; Large-scale systems; Image compression; Data compression
Details
- Language :
- English
- ISSN :
- 0018-9448
- Volume :
- 68
- Issue :
- 10
- Database :
- Academic Search Index
- Journal :
- IEEE Transactions on Information Theory
- Publication Type :
- Academic Journal
- Accession number :
- 159210737
- Full Text :
- https://doi.org/10.1109/TIT.2022.3176778