Biff (Bloom filter) codes: Fast error correction for large data sets.
- Source:
- 2012 IEEE International Symposium on Information Theory Proceedings; 1/1/2012, p483-487, 5p
- Publication Year:
- 2012
Abstract
- Large data sets are increasingly common in cloud and virtualized environments. For example, transfers of multiple gigabytes are commonplace, as are replicated blocks of such sizes. There is a need for fast error-correction or data reconciliation in such settings even when the expected number of errors is small. Motivated by such cloud reconciliation problems, we consider error-correction schemes designed for large data, after explaining why previous approaches appear unsuitable. We introduce Biff codes, which are based on Bloom filters and are designed for large data. For Biff codes with a message of length L and E errors, the encoding time is O(L), decoding time is O(L + E) and the space overhead is O(E). Biff codes are low-density parity-check codes; they are similar to Tornado codes, but are designed for errors instead of erasures. Further, Biff codes are designed to be very simple, removing any explicit graph structures and based entirely on hash tables. We derive Biff codes by a simple reduction from a set reconciliation algorithm for a recently developed data structure, invertible Bloom lookup tables. While the underlying theory is extremely simple, what makes this code especially attractive is the ease with which it can be implemented and the speed of decoding. We present results from a prototype implementation that decodes messages of 1 million words with thousands of errors in well under a second. [ABSTRACT FROM PUBLISHER]
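The abstract does not fix an implementation, but the construction it describes (an invertible Bloom lookup table over (position, word) pairs, decoded by peeling) is simple enough to sketch. Below is a minimal, self-contained Python illustration under stated assumptions: the IBLT class, the BLAKE2-based hash functions, the packing of each pair into a single integer key, 32-bit words, and the table sizing are all illustrative choices, not the authors' code.

```python
import hashlib
import random

K = 3  # number of hash functions; each key gets one cell per subtable

def _hash(seed: int, key: int, modulus: int) -> int:
    """Map a key into [0, modulus) for hash-function index `seed` (assumed construction)."""
    digest = hashlib.blake2b(f"{seed}:{key}".encode(), digest_size=8).digest()
    return int.from_bytes(digest, "big") % modulus

def _checksum(key: int) -> int:
    """Per-key checksum used to recognize 'pure' cells during peeling."""
    digest = hashlib.blake2b(f"chk:{key}".encode(), digest_size=8).digest()
    return int.from_bytes(digest, "big")

class IBLT:
    """Invertible Bloom lookup table: count / key-XOR / checksum-XOR fields per cell."""

    def __init__(self, m: int):
        assert m % K == 0, "table size must split evenly into K subtables"
        self.m = m
        self.count = [0] * m
        self.key_sum = [0] * m
        self.chk_sum = [0] * m

    def _cells(self, key: int):
        sub = self.m // K  # one cell per subtable, so a key never hits the same cell twice
        return [s * sub + _hash(s, key, sub) for s in range(K)]

    def toggle(self, key: int, delta: int):
        chk = _checksum(key)
        for i in self._cells(key):
            self.count[i] += delta
            self.key_sum[i] ^= key   # XOR is its own inverse, so insert == delete
            self.chk_sum[i] ^= chk

    def peel(self):
        """Recover (sender-only keys, receiver-only keys) by repeatedly clearing
        'pure' cells: count == +/-1 with a consistent checksum."""
        plus, minus = [], []
        progress = True
        while progress:  # a work queue would make this O(m + E); a rescan keeps it short
            progress = False
            for i in range(self.m):
                if self.count[i] in (1, -1) and self.chk_sum[i] == _checksum(self.key_sum[i]):
                    key, sign = self.key_sum[i], self.count[i]
                    (plus if sign == 1 else minus).append(key)
                    self.toggle(key, -sign)
                    progress = True
        return plus, minus

def biff_encode(message, m):
    """Sender side: insert every (position, word) pair, packed into one integer key."""
    table = IBLT(m)
    for pos, word in enumerate(message):
        table.toggle((pos << 32) | word, +1)  # assumes 32-bit words
    return table

def biff_decode(received, sketch):
    """Receiver side: subtract the local table from the sender's sketch, then peel.
    Matching pairs cancel, so only the ~2E error pairs survive the subtraction."""
    local = biff_encode(received, sketch.m)
    diff = IBLT(sketch.m)
    for i in range(sketch.m):
        diff.count[i] = sketch.count[i] - local.count[i]
        diff.key_sum[i] = sketch.key_sum[i] ^ local.key_sum[i]
        diff.chk_sum[i] = sketch.chk_sum[i] ^ local.chk_sum[i]
    senders_pairs, _corrupted_pairs = diff.peel()
    fixed = list(received)
    for key in senders_pairs:  # pairs the sender had but the receiver lacked: the errors
        pos, word = key >> 32, key & 0xFFFFFFFF
        fixed[pos] = word
    return fixed

if __name__ == "__main__":
    random.seed(1)
    message = [random.getrandbits(32) for _ in range(10_000)]
    sketch = biff_encode(message, m=3 * 300)  # O(E) cells; sized generously for ~50 errors
    corrupted = list(message)
    for pos in random.sample(range(len(message)), 50):
        corrupted[pos] ^= random.getrandbits(32) or 1  # guarantee a real change
    assert biff_decode(corrupted, sketch) == message
```

This matches the complexities claimed in the abstract: encoding touches each of the L words a constant number of times (O(L)); decoding builds the local table in O(L) and peels a difference table holding only O(E) surviving pairs; and the transmitted sketch is O(E) cells. As the abstract emphasizes, there is no explicit graph structure anywhere, only hash-table operations.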
Details
- Language:
- English
- ISBNs:
- 9781467325806
- Database:
- Complementary Index
- Journal:
- 2012 IEEE International Symposium on Information Theory Proceedings
- Publication Type:
- Conference
- Accession number:
- 86567733
- Full Text:
- https://doi.org/10.1109/ISIT.2012.6284236