42 results for "Duffy, Ken"
Search Results
2. Near-Optimal Generalized Decoding of Polar-like Codes
- Author
- Yuan, Peihong, Duffy, Ken R., and Médard, Muriel
- Subjects
- Computer Science - Information Theory
- Abstract
- We present a framework that can exploit the tradeoff between the undetected error rate (UER) and block error rate (BLER) of polar-like codes. It is compatible with all successive cancellation (SC)-based decoding methods and relies on a novel approximation that we call codebook probability. This approximation is based on an auxiliary distribution that mimics the dynamics of decoding algorithms following an SC decoding schedule. Simulation results demonstrate that, in the case of SC list (SCL) decoding, the proposed framework outperforms the state-of-the-art approximations from Forney's generalized decoding rule for polar-like codes with dynamic frozen bits. In addition, dynamic Reed-Muller (RM) codes using the proposed generalized decoding significantly outperform CRC-concatenated polar codes decoded using SCL in both BLER and UER. Finally, we briefly discuss three potential applications of the approximated codebook probability: coded pilot-free channel estimation; bitwise soft-output decoding; and improved turbo product decoding., Comment: to be published at IEEE ISIT 2024
- Published
- 2024
3. Soft-output (SO) GRAND and Iterative Decoding to Outperform LDPCs
- Author
- Yuan, Peihong, Medard, Muriel, Galligan, Kevin, and Duffy, Ken R.
- Subjects
- Computer Science - Information Theory
- Abstract
- We establish that a large, flexible class of long, high redundancy error correcting codes can be efficiently and accurately decoded with guessing random additive noise decoding (GRAND). Performance evaluation demonstrates that it is possible to construct simple concatenated codes that outperform low-density parity-check (LDPC) codes found in the 5G New Radio standard in both additive white Gaussian noise (AWGN) and fading channels. The concatenated structure enables many desirable features, including: low-complexity hardware-friendly encoding and decoding; significant flexibility in length and rate through modularity; and high levels of parallelism in encoding and decoding that enable low latency. Central is the development of a method through which any soft-input (SI) GRAND algorithm can provide soft-output (SO) in the form of an accurate a-posteriori estimate of the likelihood that a decoding is correct or, in the case of list decoding, the likelihood that each element of the list is correct. The distinguishing feature of soft-output GRAND (SOGRAND) is the provision of an estimate that the correct decoding has not been found, even when providing a single decoding. That per-block SO can be converted into accurate per-bit SO by a weighted sum that includes a term for the SI. Implementing SOGRAND adds negligible computation and memory to the existing decoding process, and using it results in a practical, low-latency alternative to LDPC codes., Comment: arXiv admin note: substantial text overlap with arXiv:2305.05777
- Published
- 2023
4. Upgrade error detection to prediction with GRAND
- Author
- Galligan, Kevin, Yuan, Peihong, Médard, Muriel, and Duffy, Ken R.
- Subjects
- Computer Science - Information Theory
- Abstract
- Guessing Random Additive Noise Decoding (GRAND) is a family of hard- and soft-detection error correction decoding algorithms that provide accurate decoding of any moderate redundancy code of any length. Here we establish a method through which any soft-input GRAND algorithm can provide soft output in the form of an accurate a posteriori estimate of the likelihood that a decoding is correct or, in the case of list decoding, the likelihood that the correct decoding is an element of the list. Implementing the method adds negligible additional computation and memory to the existing decoding process. The output permits tuning the balance between undetected errors and block errors for arbitrary moderate redundancy codes, including CRCs.
- Published
- 2023
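The GRAND procedure that entries 3 and 4 build on admits a compact sketch. The following toy code is illustrative only and appears in none of the listed papers; the single-parity-check codebook and all names are invented for the example. For a binary symmetric channel, querying putative noise patterns in increasing Hamming weight is the maximum-likelihood order, and the first codebook hit is the decoding:

```python
from itertools import combinations

def grand_decode(y, is_codeword, max_queries=10**6):
    """Hard-detection GRAND sketch: invert putative noise patterns in
    order of increasing Hamming weight (the ML order for a BSC) and
    return the first codebook member found, plus the query count."""
    n = len(y)
    queries = 0
    for weight in range(n + 1):                       # most likely noise first
        for flips in combinations(range(n), weight):  # all patterns of this weight
            candidate = list(y)
            for i in flips:
                candidate[i] ^= 1                     # invert the guessed noise
            queries += 1
            if is_codeword(candidate):
                return candidate, queries
            if queries >= max_queries:
                return None, queries                  # abandon guessing
    return None, queries

# Toy codebook: length-4 single-parity-check (even-weight) code.
is_even = lambda c: sum(c) % 2 == 0
decoded, queries = grand_decode([1, 0, 0, 0], is_even)
```

Because the membership test is the only code-dependent step, the same loop decodes any codebook, which is the universality the abstracts emphasize.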
5. PEOPL: Characterizing Privately Encoded Open Datasets with Public Labels
- Author
- Esfahanizadeh, Homa, Yala, Adam, D'Oliveira, Rafael G. L., Jaba, Andrea J. D., Quach, Victor, Duffy, Ken R., Jaakkola, Tommi S., Vaikuntanathan, Vinod, Ghobadi, Manya, Barzilay, Regina, and Médard, Muriel
- Subjects
- Computer Science - Machine Learning, Computer Science - Cryptography and Security, Computer Science - Information Theory
- Abstract
- Allowing organizations to share their data for training of machine learning (ML) models without unintended information leakage is an open problem in practice. A promising technique is to train models on encoded data. Our approach, called Privately Encoded Open Datasets with Public Labels (PEOPL), uses a certain class of randomly constructed transforms to encode sensitive data. Organizations publish their randomly encoded data and associated raw labels for ML training, where training is done without knowledge of the encoding realization. We investigate several important aspects of this problem: We introduce information-theoretic scores for privacy and utility, which quantify the average performance of an unfaithful user (e.g., adversary) and a faithful user (e.g., model developer) that have access to the published encoded data. We then theoretically characterize primitives in building families of encoding schemes that motivate the use of random deep neural networks. Empirically, we compare the performance of our randomized encoding scheme and a linear scheme to a suite of computational attacks, and we also show that our scheme achieves competitive prediction accuracy to raw-sample baselines. Moreover, we demonstrate that multiple institutions, using independent random encoders, can collaborate to train improved ML models., Comment: Submitted to IEEE Transactions on Information Forensics and Security
- Published
- 2023
6. Using channel correlation to improve decoding -- ORBGRAND-AI
- Author
- Duffy, Ken R., Grundei, Moritz, and Medard, Muriel
- Subjects
- Computer Science - Information Theory
- Abstract
- To meet the Ultra Reliable Low Latency Communication (URLLC) needs of modern applications, there have been significant advances in the development of short error correction codes and corresponding soft detection decoders. A substantial hindrance to delivering low latency is, however, the reliance on interleaving to break up omnipresent channel correlations to ensure that decoder input matches decoder assumptions. Consequently, even when using short codes, the need to wait to interleave data at the sender and de-interleave at the receiver results in significant latency that acts contrary to the goals of URLLC. Moreover, interleaving reduces channel capacity, so that potential decoding performance is degraded. Here we introduce a variant of Ordered Reliability Bits Guessing Random Additive Noise Decoding (ORBGRAND), which we call ORBGRAND-Approximate Independence (ORBGRAND-AI), a soft-detection decoder that can decode any moderate redundancy code and that overcomes the limitation of existing decoding paradigms by leveraging channel correlations and circumventing the need for interleaving. By leveraging correlation, not only is latency reduced, but error correction performance can be enhanced by multiple dB, while decoding complexity is also reduced, offering one potential solution for the provision of URLLC.
- Published
- 2023
7. GRAND-EDGE: A Universal, Jamming-resilient Algorithm with Error-and-Erasure Decoding
- Author
- Ercan, Furkan, Galligan, Kevin, Starobinski, David, Medard, Muriel, Duffy, Ken R., and Yazicigil, Rabia Tugce
- Subjects
- Computer Science - Information Theory
- Abstract
- Random jammers that overpower transmitted signals are a practical concern for many wireless communication protocols. As such, wireless receivers must be able to cope with standard channel noise and jamming (intentional or unintentional). To address this challenge, we propose a novel method to augment the resilience of the recent family of universal error-correcting GRAND algorithms. This method, called Erasure Decoding by Gaussian Elimination (EDGE), impacts the syndrome check block and is applicable to any variant of GRAND. We show that the proposed EDGE method naturally reverts to the original syndrome check function in the absence of erasures caused by jamming. We demonstrate this by implementing and evaluating GRAND-EDGE and ORBGRAND-EDGE. Simulation results, using a Random Linear Code (RLC) with a code rate of $105/128$, show that the EDGE variants lower both the Block Error Rate (BLER) and the computational complexity by up to five orders of magnitude compared to the original GRAND and ORBGRAND algorithms. We further compare ORBGRAND-EDGE to Ordered Statistics Decoding (OSD), and demonstrate an improvement of up to three orders of magnitude in the BLER., Comment: 7 pages, 7 figures, accepted for IEEE ICC 2023 conference
- Published
- 2023
8. Soft detection physical layer insecurity
- Author
- Duffy, Ken R. and Medard, Muriel
- Subjects
- Computer Science - Information Theory
- Abstract
- We establish that during the execution of any Guessing Random Additive Noise Decoding (GRAND) algorithm, an interpretable, useful measure of decoding confidence can be evaluated. This measure takes the form of a log-likelihood ratio (LLR) of the hypotheses that, should a decoding be found by a given query, the decoding is correct versus its being incorrect. That LLR can be used as soft output for a range of applications and we demonstrate its utility by showing that it can be used to confidently discard likely erroneous decodings in favor of returning more readily managed erasures. We show that this feature can be used to compromise the physical layer security of short length wiretap codes by accurately and confidently revealing a proportion of a communication when the code rate is far above the Shannon capacity of the associated hard detection channel.
- Published
- 2022
9. Physical layer insecurity
- Author
- Médard, Muriel and Duffy, Ken R.
- Subjects
- Computer Science - Information Theory
- Abstract
- In the classic wiretap model, Alice wishes to reliably communicate to Bob without being overheard by Eve who is eavesdropping over a degraded channel. Systems for achieving that physical layer security often rely on an error correction code whose rate is below the Shannon capacity of Alice and Bob's channel, so Bob can reliably decode, but above Alice and Eve's, so Eve cannot reliably decode. For the finite block length regime, several metrics have been proposed to characterise information leakage. Here we assess a new metric, the success exponent, and demonstrate that it can be operationalized through the use of Guessing Random Additive Noise Decoding (GRAND) to compromise the physical-layer security of any moderate length code. Success exponents are the natural beyond-capacity analogue of error exponents that characterise the probability that a maximum likelihood decoding is correct when the code-rate is above Shannon capacity, which is exponentially decaying in the code-length. Success exponents can be used to approximately evaluate the frequency with which Eve's decoding is correct in beyond-capacity channel conditions. Through the use of GRAND, we demonstrate that Eve can constrain her decoding procedure so that when she does identify a decoding, it is correct with high likelihood, significantly compromising Alice and Bob's communication by truthfully revealing a proportion of it. We provide general mathematical expressions for the determination of success exponents as well as for the evaluation of Eve's query number threshold, using the binary symmetric channel as a worked example. As GRAND algorithms are code-book agnostic and can decode any code structure, we provide empirical results for Random Linear Codes as exemplars. Simulation results demonstrate the practical possibility of compromising physical layer security.
- Published
- 2022
10. GRAND-assisted Optimal Modulation
- Author
- Ozaydin, Basak, Médard, Muriel, and Duffy, Ken
- Subjects
- Computer Science - Information Theory
- Abstract
- Optimal modulation (OM) schemes for Gaussian channels with peak and average power constraints are known to require nonuniform probability distributions over signal points, which presents practical challenges. An established way to map uniform binary sources to non-uniform symbol distributions is to assign a different number of bits to different constellation points. Doing so, however, means that erroneous demodulation at the receiver can lead to bit insertions or deletions that result in significant binary error propagation. In this paper, we introduce a lightweight variant of Guessing Random Additive Noise Decoding (GRAND) to resolve insertion and deletion errors at the receiver by using a simple padding scheme. Performance evaluation demonstrates that our approach results in an overall demodulated bit-error-rate gain of over 2 dB in Eb/N0 when compared to 128-Quadrature Amplitude Modulation (QAM). The GRAND-aided OM scheme outperforms coding with a low-density parity check code of the same average rate as that induced by our simple padding., Comment: Presented at IEEE Globecom 2022
- Published
- 2022
11. A General Security Approach for Soft-information Decoding against Smart Bursty Jammers
- Author
- Ercan, Furkan, Galligan, Kevin, Duffy, Ken R., Medard, Muriel, Starobinski, David, and Yazicigil, Rabia Tugce
- Subjects
- Computer Science - Information Theory, Computer Science - Cryptography and Security
- Abstract
- Malicious attacks such as jamming can cause significant disruption or complete denial of service (DoS) to wireless communication protocols. Moreover, jamming devices are getting smarter, making them difficult to detect. Forward error correction, which adds redundancy to data, is commonly deployed to protect communications against the deleterious effects of channel noise. Soft-information error correction decoders obtain reliability information from the receiver to inform their decoding, but in the presence of a jammer such information is misleading and results in degraded error correction performance. As decoders assume noise occurs independently on each bit, a bursty jammer will lead to greater degradation in performance than a non-bursty one. Here we establish, however, that such temporal dependencies can aid inferences on which bits have been subjected to jamming, thus enabling counter-measures. In particular, we introduce a pre-decoding processing step that updates log-likelihood ratio (LLR) reliability information to reflect inferences in the presence of a jammer, enabling improved decoding performance for any soft detection decoder. The proposed method requires no alteration to the decoding algorithm. Simulation results show that the method correctly infers a significant proportion of jamming in any received frame. Results with one particular decoding algorithm, the recently introduced ORBGRAND, show that the proposed method reduces the block-error rate (BLER) by an order of magnitude for a selection of codes, and prevents complete DoS at the receiver., Comment: Accepted for GLOBECOM 2022 Workshops. Contains 7 pages and 7 figures
- Published
- 2022
12. Soft decoding without soft demapping with ORBGRAND
- Author
- An, Wei, Medard, Muriel, and Duffy, Ken R.
- Subjects
- Computer Science - Information Theory
- Abstract
- For spectral efficiency, higher order modulation symbols confer information on more than one bit. As soft detection forward error correction decoders assume the availability of information at binary granularity, however, soft demappers are required to compute per-bit reliabilities from complex-valued signals. Here we show that the recently introduced universal soft detection decoder ORBGRAND can be adapted to work with symbol-level soft information, obviating the need for energy expensive soft demapping. We establish that doing so reduces complexity while retaining the error correction performance achieved with the optimal demapper.
- Published
- 2022
13. Block turbo decoding with ORBGRAND
- Author
- Galligan, Kevin, Médard, Muriel, and Duffy, Ken R.
- Subjects
- Computer Science - Information Theory
- Abstract
- Guessing Random Additive Noise Decoding (GRAND) is a family of universal decoding algorithms suitable for decoding any moderate redundancy code of any length. We establish that, through the use of list decoding, soft-input variants of GRAND can replace the Chase algorithm as the component decoder in the turbo decoding of product codes. In addition to being able to decode arbitrary product codes, rather than just those with dedicated hard-input component code decoders, results show that ORBGRAND achieves a coding gain of up to 0.7 dB over the Chase algorithm with the same list size.
- Published
- 2022
14. GRAND for Fading Channels using Pseudo-soft Information
- Author
- Sarieddeen, Hadi, Médard, Muriel, and Duffy, Ken R.
- Subjects
- Computer Science - Information Theory
- Abstract
- Guessing random additive noise decoding (GRAND) is a universal maximum-likelihood decoder that recovers code-words by guessing rank-ordered putative noise sequences and inverting their effect until one or more valid code-words are obtained. This work explores how GRAND can leverage additive-noise statistics and channel-state information in fading channels. Instead of computing per-bit reliability information in detectors and passing this information to the decoder, we propose leveraging the colored noise statistics following channel equalization as pseudo-soft information for sorting noise sequences. We investigate the efficacy of pseudo-soft information extracted from linear zero-forcing and minimum mean square error equalization when fed to a hardware-friendly soft-GRAND (ORBGRAND). We demonstrate that the proposed pseudo-soft GRAND schemes approximate the performance of state-of-the-art decoders of CA-Polar and BCH codes that avail of complete soft information. Compared to hard-GRAND, pseudo-soft ORBGRAND introduces up to 10 dB SNR gains for a target 10^-3 block-error rate., Comment: To appear in the IEEE GLOBECOM 2022 proceedings. arXiv admin note: text overlap with arXiv:2207.10836
- Published
- 2022
15. Soft-input, soft-output joint detection and GRAND
- Author
- Sarieddeen, Hadi, Médard, Muriel, and Duffy, Ken R.
- Subjects
- Computer Science - Information Theory
- Abstract
- Guessing random additive noise decoding (GRAND) is a maximum likelihood (ML) decoding method that identifies the noise effects corrupting code-words of arbitrary code-books. In a joint detection and decoding framework, this work demonstrates how GRAND can leverage crude soft information in received symbols and channel state information to generate, through guesswork, soft bit reliability outputs in log-likelihood ratios (LLRs). The LLRs are generated via successive computations of Euclidean-distance metrics corresponding to candidate noise-recovered words. Noting that the entropy of noise is much smaller than that of information bits, a small number of noise effect guesses generally suffices to hit a code-word, which allows generating LLRs for critical bits; LLR saturation is applied to the remaining bits. In an iterative (turbo) mode, the generated LLRs at a given soft-input, soft-output GRAND iteration serve as enhanced a priori information that adapts noise-sequence guess ordering in a subsequent iteration. Simulations demonstrate that a few turbo-GRAND iterations match the performance of ML-detection-based soft-GRAND in both AWGN and Rayleigh fading channels at a complexity cost that, on average, grows linearly (instead of exponentially) with the number of symbols., Comment: To appear in the IEEE GLOBECOM 2022 proceedings
- Published
- 2022
16. On the Role of Quantization of Soft Information in GRAND
- Author
- Yuan, Peihong, Duffy, Ken R., Gabhart, Evan P., and Médard, Muriel
- Subjects
- Computer Science - Information Theory
- Abstract
- In this work, we investigate guessing random additive noise decoding (GRAND) with quantized soft input. First, we analyze the achievable rate of ordered reliability bits GRAND (ORBGRAND), which uses the rank order of the reliability as quantized soft information. We show that multi-line ORBGRAND can approach capacity for any signal-to-noise ratio (SNR). We then introduce discretized soft GRAND (DSGRAND), which uses information from a conventional quantizer. Simulation results show that DSGRAND well approximates maximum-likelihood (ML) decoding with a number of quantization bits that is in line with current soft decoding implementations. For a (128,106) CRC-concatenated polar code, the basic ORBGRAND is able to match or outperform CRC-aided successive cancellation list (CA-SCL) decoding with codeword list size of 64 and 3 bits of quantized soft information, while DSGRAND outperforms CA-SCL decoding with a list size of 128 codewords. Both ORBGRAND and DSGRAND exhibit approximately an order of magnitude less average complexity and two orders of magnitude smaller memory requirements than CA-SCL.
- Published
- 2022
17. AES as Error Correction: Cryptosystems for Reliable Communication
- Author
- Cohen, Alejandro, D'Oliveira, Rafael G. L., Duffy, Ken R., Woo, Jongchan, and Médard, Muriel
- Subjects
- Computer Science - Information Theory, Computer Science - Cryptography and Security
- Abstract
- In this paper, we show that the Advanced Encryption Standard (AES) cryptosystem can be used as an error-correcting code to obtain reliability over noisy communication and data systems. Moreover, we characterize a family of computational cryptosystems that can potentially be used as well-performing error-correcting codes. In particular, we show that simple padding followed by a cryptosystem with uniform or pseudo-uniform outputs can approach the error-correcting performance of random codes. We empirically contrast the performance of the proposed approach using AES as error correction with that of Random Linear Codes and CA-Polar codes and show that in practical scenarios, they achieve almost the same performance. Finally, we present a modified counter mode of operation, named input plaintext counter mode, in order to utilize AES for multiple blocks while retaining its error correcting capabilities.
- Published
- 2022
18. Ordered Reliability Bits Guessing Random Additive Noise Decoding
- Author
- Duffy, Ken R., An, Wei, and Medard, Muriel
- Subjects
- Computer Science - Information Theory, 94A15, 68P30
- Abstract
- Error correction techniques traditionally focus on the co-design of restricted code-structures in tandem with code-specific decoders that are computationally efficient when decoding long codes in hardware. Modern applications are, however, driving demand for ultra-reliable low-latency communications (URLLC), rekindling interest in the performance of shorter, higher-rate error correcting codes, and raising the possibility of revisiting universal, code-agnostic decoders. To that end, here we introduce a soft-detection variant of Guessing Random Additive Noise Decoding (GRAND) called Ordered Reliability Bits GRAND (ORBGRAND) that can accurately decode any moderate redundancy block-code. It is designed with efficient circuit implementation in mind, and determines accurate decodings while retaining the original hard detection GRAND algorithm's suitability for a highly parallelized implementation in hardware. ORBGRAND is shown to provide excellent soft decision block error performance for codes of distinct classes (BCH, CA-Polar and RLC) with modest complexity, while providing better block error rate performance than CA-SCL, a state-of-the-art soft detection CA-Polar decoder. ORBGRAND offers the possibility of an accurate, energy efficient soft detection decoder suitable for delivering URLLC in a single hardware realization.
- Published
- 2022
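The "ordered reliability bits" idea in entries 18 and 25 can be made concrete. ORBGRAND ranks bits by reliability and queries putative error patterns in increasing logistic weight, the sum of the reliability ranks of the flipped positions. The sketch below is illustrative only (brute-force enumeration over a toy length, not the paper's efficient pattern generator):

```python
from itertools import combinations

def orbgrand_order(reliabilities):
    """Return candidate error patterns (tuples of flipped positions) in
    increasing logistic weight: bits are ranked 1..n from least to most
    reliable, and a pattern's weight is the sum of its flipped ranks.
    Enumerates all 2^n patterns, so only suitable for short toy codes."""
    n = len(reliabilities)
    order = sorted(range(n), key=lambda i: reliabilities[i])
    rank = {bit: r + 1 for r, bit in enumerate(order)}  # least reliable -> 1
    patterns = []
    for weight in range(n + 1):
        for flips in combinations(range(n), weight):
            patterns.append((sum(rank[i] for i in flips), flips))
    patterns.sort(key=lambda t: t[0])   # increasing logistic weight
    return [flips for _, flips in patterns]

# Bit 1 is least reliable, so flipping it alone is the first real guess.
query_order = orbgrand_order([0.9, 0.1, 0.5])
```

Because the order depends only on the rank of each bit's reliability, not its value, the schedule needs no channel-noise statistics, which is what makes it hardware-friendly.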
19. Partial Encryption after Encoding for Security and Reliability in Data Systems
- Author
- Cohen, Alejandro, D'Oliveira, Rafael G. L., Duffy, Ken R., and Médard, Muriel
- Subjects
- Computer Science - Information Theory, Computer Science - Cryptography and Security
- Abstract
- We consider the problem of secure and reliable communication over a noisy multipath network. Previous work considering a noiseless version of our problem proposed a hybrid universal network coding cryptosystem (HUNCC). By combining an information-theoretically secure encoder together with partial encryption, HUNCC is able to obtain security guarantees, even in the presence of an all-observing eavesdropper. In this paper, we propose a version of HUNCC for noisy channels (N-HUNCC). This modification requires four main novelties. First, we present a network coding construction which is jointly, individually secure and error-correcting. Second, we introduce a new security definition which is a computational analogue of individual security, which we call individual indistinguishability under chosen ciphertext attack (individual IND-CCA1), and show that N-HUNCC satisfies it. Third, we present a noise-based decoder for N-HUNCC, which permits the decoding of the encoded-then-encrypted data. Finally, we discuss how to select parameters for N-HUNCC and its error-correcting capabilities.
- Published
- 2022
20. CRC Codes as Error Correction Codes
- Author
- An, Wei, Médard, Muriel, and Duffy, Ken R.
- Subjects
- Computer Science - Information Theory
- Abstract
- CRC codes have long since been adopted in a vast range of applications. The established notion that they are suitable primarily for error detection can be set aside through use of the recently proposed Guessing Random Additive Noise Decoding (GRAND). Hard-detection (GRAND-SOS) and soft-detection (ORBGRAND) variants can decode any short, high-rate block code, making them suitable for error correction of CRC-coded data. When decoded with GRAND, short CRC codes have error correction capability that is at least as good as popular codes such as BCH codes, but with no restriction on either code length or rate. The state-of-the-art CA-Polar codes are concatenated CRC and Polar codes. For error correction, we find that the CRC is a better short code than either Polar or CA-Polar codes. Moreover, the standard CA-SCL decoder only uses the CRC for error detection and therefore suffers severe performance degradation in short, high-rate settings when compared with the performance GRAND provides, which uses all of the CA-Polar bits for error correction. Using GRAND, existing systems can be upgraded from error detection to low-latency error correction without re-engineering the encoder, and additional applications of CRCs can be found in IoT, Ultra-Reliable Low Latency Communication (URLLC), and beyond. The universality of GRAND, its ready parallelized implementation in hardware, and the good performance of CRCs as codes make their combination a viable solution for low-latency applications., Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible
- Published
- 2021
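Entry 20's central point, that a CRC becomes an error-correcting code once GRAND supplies the search, can be illustrated end to end. The 3-bit generator polynomial x^3+x+1 below is an arbitrary toy choice, not one from the paper; the CRC check simply serves as GRAND's codebook membership test:

```python
from itertools import combinations

def poly_mod(bits, poly=0b1011):
    """GF(2) remainder of the bit-polynomial `bits` modulo `poly`."""
    deg = poly.bit_length() - 1
    reg = 0
    for b in bits:
        reg = (reg << 1) | b
        if (reg >> deg) & 1:
            reg ^= poly
    return reg

def crc_encode(msg, poly=0b1011):
    """Systematic CRC encoding: append the remainder so that the whole
    codeword polynomial is divisible by the generator."""
    deg = poly.bit_length() - 1
    r = poly_mod(msg + [0] * deg, poly)
    return msg + [(r >> i) & 1 for i in reversed(range(deg))]

def grand_crc_decode(y, poly=0b1011):
    """GRAND with the CRC check as the codebook membership test: flip
    bit patterns in order of increasing weight until the CRC passes."""
    n = len(y)
    for w in range(n + 1):
        for flips in combinations(range(n), w):
            cand = list(y)
            for i in flips:
                cand[i] ^= 1
            if poly_mod(cand, poly) == 0:   # valid CRC codeword found
                return cand

codeword = crc_encode([1, 0, 1, 1])
received = list(codeword)
received[2] ^= 1                            # inject a single bit error
corrected = grand_crc_decode(received)
```

No dedicated CRC decoder exists here; the detection check alone, driven by GRAND's guessing order, corrects the error.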
21. A Coding Theory Perspective on Multiplexed Molecular Profiling of Biological Tissues
- Author
- D'Alessio, Luca, Liu, Litian, Duffy, Ken, Eldar, Yonina C., Medard, Muriel, and Babadi, Mehrtash
- Subjects
- Computer Science - Information Theory, Electrical Engineering and Systems Science - Signal Processing
- Abstract
- High-throughput and quantitative experimental technologies are experiencing rapid advances in the biological sciences. One important recent technique is multiplexed fluorescence in situ hybridization (mFISH), which enables the identification and localization of large numbers of individual strands of RNA within single cells. Core to that technology is a coding problem: with each RNA sequence of interest being a codeword, how to design a codebook of probes, and how to decode the resulting noisy measurements? Published work has relied on assumptions of uniformly distributed codewords and binary symmetric channels for decoding and to a lesser degree for code construction. Here we establish that both of these assumptions are inappropriate in the context of mFISH experiments and substantial decoding performance gains can be obtained by using more appropriate, less classical, assumptions. We propose a more appropriate asymmetric channel model that can be readily parameterized from data and use it to develop a maximum a posteriori (MAP) decoder. We show that the false discovery rate for rare RNAs, which is the key experimental metric, is vastly improved with MAP decoders even when employed with the existing sub-optimal codebook. Using an evolutionary optimization methodology, we further show that by permuting the codebook to better align with the prior, which is an experimentally straightforward procedure, significant further improvements are possible., Comment: This paper is accepted to The International Symposium on Information Theory and Its Applications (ISITA) 2020
- Published
- 2021
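The gain entry 21 reports comes from letting a non-uniform prior and an asymmetric channel enter the decision rule. The toy decoder below is illustrative only; the codebook, channel parameters, and priors are invented, not taken from the paper:

```python
def map_decode(received, codebook, prior, p01, p10):
    """Toy MAP decoder for an asymmetric binary channel:
    P(read 1 | sent 0) = p01, P(read 0 | sent 1) = p10.
    The codeword prior (e.g. expected RNA abundance) multiplies the
    channel likelihood, unlike plain ML decoding with a uniform prior."""
    def likelihood(c):
        p = 1.0
        for y, x in zip(received, c):
            if x == 0:
                p *= p01 if y == 1 else 1 - p01
            else:
                p *= p10 if y == 0 else 1 - p10
        return p
    return max(codebook, key=lambda c: prior[c] * likelihood(c))

codebook = [(0, 0, 1, 1), (1, 1, 0, 0)]
y = (1, 0, 1, 1)
uniform = {c: 0.5 for c in codebook}
skewed = {(0, 0, 1, 1): 0.01, (1, 1, 0, 0): 0.99}
ml_pick = map_decode(y, codebook, uniform, p01=0.3, p10=0.05)   # ML: ignores abundance
map_pick = map_decode(y, codebook, skewed, p01=0.3, p10=0.05)   # MAP: prior flips the call
```

With a uniform prior the rule reduces to ML; with a skewed prior the same observation decodes differently, which is the mechanism behind the reported false-discovery-rate improvement for rare RNAs.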
22. Keep the bursts and ditch the interleavers
- Author
- An, Wei, Médard, Muriel, and Duffy, Ken R.
- Subjects
- Computer Science - Information Theory
- Abstract
- To facilitate applications in IoT, 5G, and beyond, there is an engineering need to enable high-rate, low-latency communications. Errors in physical channels typically arrive in clumps, but most decoders are designed assuming that channels are memoryless. As a result, communication networks rely on interleaving over tens of thousands of bits so that channel conditions match decoder assumptions. Even for short high rate codes, awaiting sufficient data to interleave at the sender and de-interleave at the receiver is a significant source of unwanted latency. Using existing decoders with non-interleaved channels causes a degradation in block error rate performance owing to mismatch between the decoder's channel model and true channel behaviour. Through further development of the recently proposed Guessing Random Additive Noise Decoding (GRAND) algorithm, which we call GRAND-MO for GRAND Markov Order, here we establish that by abandoning interleaving and embracing bursty noise, low-latency, short-code, high-rate communication is possible with block error rates that outperform their interleaved counterparts by a substantial margin. Moreover, while most decoders are twinned to a specific code-book structure, GRAND-MO can decode any code. Using this property, we establish that certain well-known structured codes are ill-suited for use in bursty channels, but Random Linear Codes (RLCs) are robust to correlated noise. This work suggests that the use of RLCs with GRAND-MO is a good candidate for applications requiring high throughput with low latency., Comment: 6 pages
- Published
- 2020
23. Noise Recycling
- Author
- Cohen, Alejandro, Solomon, Amit, Duffy, Ken R., and Médard, Muriel
- Subjects
- Computer Science - Information Theory
- Abstract
- We introduce Noise Recycling, a method that enhances decoding performance of channels subject to correlated noise without joint decoding. The method can be used with any combination of codes, code-rates and decoding techniques. In the approach, a continuous realization of noise is estimated from a lead channel by subtracting its decoded output from its received signal. This estimate is then used to improve the accuracy of decoding of an orthogonal channel that is experiencing correlated noise. In this design, channels aid each other only through the provision of noise estimates post-decoding. In a Gauss-Markov model of correlated noise, we constructively establish that noise recycling employing a simple successive order enables higher rates than not recycling noise. Simulations illustrate that noise recycling can be employed with any code and decoder, and that noise recycling shows Block Error Rate (BLER) benefits when applying the same predetermined order as used to enhance the rate region. Finally, for short codes we establish that an additional BLER improvement is possible through noise recycling with racing, where the lead channel is not pre-determined, but is chosen on the fly based on which decoder completes first., Comment: Appears in IEEE International Symposium on Information Theory, ISIT 2020, based on arXiv:2006.04897
- Published
- 2020
24. Soft Maximum Likelihood Decoding using GRAND
- Author
-
Solomon, Amit, Duffy, Ken R., and Médard, Muriel
- Subjects
Computer Science - Information Theory - Abstract
Maximum Likelihood (ML) decoding of forward error correction codes is known to be optimally accurate, but is not used in practice as it proves too challenging to implement efficiently. Here we introduce an ML decoder called SGRAND, which is a development of a previously described hard detection ML decoder called GRAND, that fully avails of soft detection information and is suitable for use with any high-rate, short-length block code. We assess SGRAND's performance on CRC-aided Polar (CA-Polar) codes, which will be used for all control channel communication in 5G NR, comparing its accuracy with CRC-Aided Successive Cancellation List decoding (CA-SCL), a state-of-the-art soft-information decoder specific to CA-Polar codes.
- Published
- 2020
25. Ordered Reliability Bits Guessing Random Additive Noise Decoding
- Author
-
Duffy, Ken R.
- Subjects
Computer Science - Information Theory - Abstract
Modern applications are driving demand for ultra-reliable low-latency communications, rekindling interest in the performance of short, high-rate error correcting codes. To that end, here we introduce a soft-detection variant of Guessing Random Additive Noise Decoding (GRAND) called Ordered Reliability Bits GRAND (ORBGRAND) that can decode any short, high-rate block code. For a code of $n$ bits, it avails of no more than $\lceil\log_2(n)\rceil$ bits of code-book-independent quantized soft detection information per received bit to determine an accurate decoding while retaining the original algorithm's suitability for a highly parallelized implementation in hardware. ORBGRAND is shown to provide similar block error rate performance for codes of distinct classes (BCH, CA-Polar and RLC) with low complexity, while providing better block error rate performance than CA-SCL, a state-of-the-art soft detection CA-Polar decoder.
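The query order at the heart of ORBGRAND can be sketched compactly. The sketch below assumes the commonly described "logistic weight" formulation: bits are ranked by reliability (1 = least reliable) and flip patterns are tested in increasing sum of ranks, which reduces to enumerating partitions of each integer into distinct parts. Helper names are illustrative, and a real decoder would pair this generator with a codebook membership check.

```python
import numpy as np

def distinct_partitions(total, max_part):
    """Yield partitions of `total` into distinct parts, each <= max_part."""
    def rec(remaining, cap):
        if remaining == 0:
            yield []
            return
        for part in range(min(remaining, cap), 0, -1):
            for rest in rec(remaining - part, part - 1):
                yield [part] + rest
    yield from rec(total, max_part)

def orbgrand_order(llrs):
    """Yield candidate bit-flip patterns (sets of codeword positions) in
    increasing logistic weight: the sum of the reliability ranks of the
    flipped bits, with rank 1 assigned to the least reliable bit."""
    n = len(llrs)
    rank_to_pos = np.argsort(np.abs(llrs))       # least reliable first
    yield frozenset()                            # the zero-noise guess
    for lw in range(1, n * (n + 1) // 2 + 1):
        for ranks in distinct_partitions(lw, n):
            yield frozenset(rank_to_pos[r - 1] for r in ranks)

# enumerate all 2^3 patterns for a length-3 toy received word
pats = list(orbgrand_order(np.array([0.1, -2.0, 0.5])))
```

Note that the ordering depends only on the reliability ranks, not on the reliability values themselves, which is what makes the quantized soft information in the abstract sufficient.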
- Published
- 2020
26. 5G NR CA-Polar Maximum Likelihood Decoding by GRAND
- Author
-
Duffy, Ken, Solomon, Amit, Konwar, Kishori M., and Medard, Muriel
- Subjects
Computer Science - Information Theory ,94A05 ,E.4 - Abstract
CA-Polar codes have been selected for all control channel communications in 5G NR, but accurate, computationally feasible decoders are still subject to development. Here we report the performance of a recently proposed class of optimally precise Maximum Likelihood (ML) decoders, GRAND, that can be used with any block-code. As published theoretical results indicate that GRAND is computationally efficient for short-length, high-rate codes and 5G CA-Polar codes are in that class, here we consider GRAND's utility for decoding them. Simulation results indicate that decoding of 5G CA-Polar codes by GRAND, and a simple soft detection variant, is a practical possibility.
- Published
- 2019
27. Guessing random additive noise decoding with symbol reliability information (SRGRAND)
- Author
-
Duffy, Ken R., Médard, Muriel, and An, Wei
- Subjects
Computer Science - Information Theory ,E.4 - Abstract
The design and implementation of error correcting codes has long been informed by two fundamental results: Shannon's 1948 capacity theorem, which established that long codes use noisy channels most efficiently; and Berlekamp, McEliece, and Van Tilborg's 1978 theorem on the NP-hardness of decoding linear codes. These results shifted focus away from creating code-independent decoders, but recent low-latency communication applications necessitate relatively short codes, providing motivation to reconsider the development of universal decoders. We introduce a scheme for employing binarized symbol soft information within Guessing Random Additive Noise Decoding, a universal hard detection decoder. We incorporate codebook-independent quantization of soft information to mark demodulated symbols as reliable or unreliable. We introduce two decoding algorithms: one identifies a conditional Maximum Likelihood (ML) decoding; the other either reports a conditional ML decoding or an error. For random codebooks, we present error exponents and asymptotic complexity, and show benefits over hard detection. As empirical illustrations, we compare performance with majority logic decoding of Reed-Muller codes, with Berlekamp-Massey decoding of Bose-Chaudhuri-Hocquenghem codes, and with CA-SCL decoding of CA-Polar codes, and establish the performance of Random Linear Codes, which require a universal decoder and offer a broader palette of code sizes and rates than traditional codes., Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible
- Published
- 2019
28. Capacity-achieving Guessing Random Additive Noise Decoding (GRAND)
- Author
-
Duffy, Ken R., Li, Jiange, and Médard, Muriel
- Subjects
Computer Science - Information Theory ,94A24 ,E.4 - Abstract
We introduce a new algorithm for realizing Maximum Likelihood (ML) decoding in discrete channels with or without memory. In it, the receiver rank-orders noise sequences from most likely to least likely. Subtracting noise from the received signal in that order, the first instance that results in a member of the code-book is the ML decoding. We name this algorithm GRAND, for Guessing Random Additive Noise Decoding. We establish that GRAND is capacity-achieving when used with random code-books. For rates below capacity we identify error exponents, and for rates beyond capacity we identify success exponents. We determine the scheme's complexity in terms of the number of computations the receiver performs. For rates beyond capacity, this reveals thresholds for the number of guesses by which, if a member of the code-book is identified, it is likely to be the transmitted code-word. We introduce an approximate ML decoding scheme where the receiver abandons the search after a fixed number of queries, an approach we dub GRANDAB, for GRAND with ABandonment. While not an ML decoder, we establish that the algorithm GRANDAB is also capacity-achieving for an appropriate choice of abandonment threshold, and characterize its complexity, error and success exponents. Worked examples are presented for Markovian noise that indicate these decoding schemes substantially outperform the brute force decoding approach., Comment: IEEE Transactions on Information Theory, to appear
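The guessing procedure in the abstract admits a compact sketch. Assuming a binary symmetric channel with crossover probability below one half, rank-ordering noise sequences by likelihood reduces to enumerating binary error patterns in increasing Hamming weight; membership of a binary linear code is checked against its parity-check matrix. The function name, the abandonment parameter, and the toy repetition-code example are illustrative assumptions, not from the paper.

```python
import itertools
import numpy as np

def grand_decode(y, H, max_queries=None):
    """GRAND sketch for a binary linear code over a BSC (p < 0.5):
    test putative noise patterns in order of increasing Hamming weight
    (most likely first) until y XOR e satisfies all parity checks.
    Optionally abandon after max_queries guesses (GRANDAB)."""
    n = len(y)
    queries = 0
    for w in range(n + 1):
        for flips in itertools.combinations(range(n), w):
            queries += 1
            if max_queries is not None and queries > max_queries:
                return None, queries          # abandoned: report an error
            e = np.zeros(n, dtype=int)
            e[list(flips)] = 1
            c = y ^ e                          # strip the guessed noise
            if not (H @ c % 2).any():          # code-book member found
                return c, queries              # this is the ML decoding
    return None, queries

# toy example: (3,1) repetition code with one bit flipped
H = np.array([[1, 1, 0],
              [0, 1, 1]])                      # parity-check matrix
c, q = grand_decode(np.array([1, 0, 1]), H)
```

Because the decoder only queries code-book membership, the same loop works unchanged for any code supplying a membership test, which is the code-universality the abstract emphasizes.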
- Published
- 2018
29. A Characterization of Guesswork on Swiftly Tilting Curves
- Author
-
Beirami, Ahmad, Calderbank, Robert, Christiansen, Mark, Duffy, Ken, and Médard, Muriel
- Subjects
Computer Science - Information Theory - Abstract
Given a collection of strings, each with an associated probability of occurrence, the guesswork of each of them is their position in a list ordered from most likely to least likely, breaking ties arbitrarily. Guesswork is central to several applications in information theory: Average guesswork provides a lower bound on the expected computational cost of a sequential decoder to decode successfully the transmitted message; the complementary cumulative distribution function of guesswork gives the error probability in list decoding; the logarithm of guesswork is the number of bits needed in optimal lossless one-to-one source coding; and guesswork is the number of trials required of an adversary to breach a password protected system in a brute-force attack. In this paper, we consider memoryless string-sources that generate strings consisting of i.i.d. characters drawn from a finite alphabet, and characterize their corresponding guesswork. Our main tool is the tilt operation. We show that the tilt operation on a memoryless string-source parametrizes an exponential family of memoryless string-sources, which we refer to as the tilted family. We provide an operational meaning to the tilted families by proving that two memoryless string-sources result in the same guesswork on all strings of all lengths if and only if their respective categorical distributions belong to the same tilted family. Establishing some general properties of the tilt operation, we generalize the notions of weakly typical set and asymptotic equipartition property to tilted weakly typical sets of different orders. We use this new definition to characterize the large deviations for all atypical strings and characterize the volume of weakly typical sets of different orders. We subsequently build on this characterization to prove large deviation bounds on guesswork and provide an accurate approximation of its PMF., Comment: Accepted for publication in IEEE Trans. Inf. Theory
- Published
- 2018
30. Guesswork Subject to a Total Entropy Budget
- Author
-
Rezaee, Arman, Beirami, Ahmad, Makhdoumi, Ali, Medard, Muriel, and Duffy, Ken
- Subjects
Computer Science - Information Theory ,Computer Science - Cryptography and Security - Abstract
We consider an abstraction of computational security in password protected systems where a user draws a secret string of given length with i.i.d. characters from a finite alphabet, and an adversary would like to identify the secret string by querying, or guessing, the identity of the string. The concept of a "total entropy budget" on the chosen word by the user is natural, otherwise the chosen password could have arbitrary length and complexity. One intuitively expects that a password chosen from the uniform distribution is more secure. This is not the case, however, if we consider only the average guesswork of the adversary when the user is subject to a total entropy budget. The optimality of the uniform distribution for the user's secret string holds when there is also a budget on the guessing adversary. We suppose that the user is subject to a "total entropy budget" for choosing the secret string, whereas the computational capability of the adversary is determined by his "total guesswork budget." We study the regime where the adversary's chances are exponentially small in guessing the secret string chosen subject to a total entropy budget. We introduce a certain notion of uniformity and show that a more uniform source will provide better protection against the adversary in terms of his chances of success in guessing the secret string. In contrast, the average number of queries that it takes the adversary to identify the secret string is smaller for the more uniform secret string subject to the same total entropy budget., Comment: In Proc. of Allerton 2017 (19 pages, 4 figures)
- Published
- 2017
31. Privacy with Estimation Guarantees
- Author
-
Wang, Hao, Vo, Lisa, Calmon, Flavio P., Médard, Muriel, Duffy, Ken R., and Varia, Mayank
- Subjects
Computer Science - Information Theory ,Computer Science - Machine Learning - Abstract
We study the central problem in data privacy: how to share data with an analyst while providing both privacy and utility guarantees to the user that owns the data. In this setting, we present an estimation-theoretic analysis of the privacy-utility trade-off (PUT). Here, an analyst is allowed to reconstruct (in a mean-squared error sense) certain functions of the data (utility), while other private functions should not be reconstructed with distortion below a certain threshold (privacy). We demonstrate how chi-square information captures the fundamental PUT in this case and provide bounds for the best PUT. We propose a convex program to compute privacy-assuring mappings when the functions to be disclosed and hidden are known a priori and the data distribution is known. We derive lower bounds on the minimum mean-squared error of estimating a target function from the disclosed data and evaluate the robustness of our approach when an empirical distribution is used to compute the privacy-assuring mappings instead of the true data distribution. We illustrate the proposed approach through two numerical experiments.
- Published
- 2017
32. Principal Inertia Components and Applications
- Author
-
Calmon, Flavio P., Makhdoumi, Ali, Médard, Muriel, Varia, Mayank, Christiansen, Mark, and Duffy, Ken R.
- Subjects
Computer Science - Information Theory - Abstract
We explore properties and applications of the Principal Inertia Components (PICs) between two discrete random variables $X$ and $Y$. The PICs lie in the intersection of information and estimation theory, and provide a fine-grained decomposition of the dependence between $X$ and $Y$. Moreover, the PICs describe which functions of $X$ can or cannot be reliably inferred (in terms of MMSE) given an observation of $Y$. We demonstrate that the PICs play an important role in information theory, and they can be used to characterize information-theoretic limits of certain estimation problems. In privacy settings, we prove that the PICs are related to fundamental limits of perfect privacy., Comment: Overlaps with arXiv:1405.1472 and arXiv:1310.1512
- Published
- 2017
33. A Perspective on Future Research Directions in Information Theory
- Author
-
Andrews, Jeffrey G., Dimakis, Alexandros, Dolecek, Lara, Effros, Michelle, Medard, Muriel, Milenkovic, Olgica, Montanari, Andrea, Vishwanath, Sriram, Yeh, Edmund, Berry, Randall, Duffy, Ken, Feizi, Soheil, Kato, Saul, Kellis, Manolis, Licht, Stuart, Sorenson, Jon, Varshney, Lav, and Vikalo, Haris
- Subjects
Computer Science - Information Theory - Abstract
Information theory is rapidly approaching its 70th birthday. What are promising future directions for research in information theory? Where will information theory be having the most impact in 10-20 years? What new and emerging areas are ripe for the most impact, of the sort that information theory has had on the telecommunications industry over the last 60 years? How should the IEEE Information Theory Society promote high-risk new research directions and broaden the reach of information theory, while continuing to be true to its ideals and insisting on the intellectual rigor that makes its breakthroughs so powerful? These are some of the questions that an ad hoc committee (composed of the present authors) explored over the past two years. We have discussed and debated these questions, and solicited detailed inputs from experts in fields including genomics, biology, economics, and neuroscience. This report is the result of these discussions.
- Published
- 2015
34. Hiding Symbols and Functions: New Metrics and Constructions for Information-Theoretic Security
- Author
-
Calmon, Flavio du Pin, Médard, Muriel, Varia, Mayank, Duffy, Ken R., Christiansen, Mark M., and Zeger, Linda M.
- Subjects
Computer Science - Information Theory - Abstract
We present information-theoretic definitions and results for analyzing symmetric-key encryption schemes beyond the perfect secrecy regime, i.e. when perfect secrecy is not attained. We adopt two lines of analysis, one based on lossless source coding, and another akin to rate-distortion theory. We start by presenting a new information-theoretic metric for security, called symbol secrecy, and derive associated fundamental bounds. We then introduce list-source codes (LSCs), which are a general framework for mapping a key length (entropy) to a list size that an eavesdropper has to resolve in order to recover a secret message. We provide explicit constructions of LSCs, and demonstrate that, when the source is uniformly distributed, the highest level of symbol secrecy for a fixed key length can be achieved through a construction based on maximum distance separable (MDS) codes. Using an analysis related to rate-distortion theory, we then show how symbol secrecy can be used to determine the probability that an eavesdropper correctly reconstructs functions of the original plaintext. We illustrate how these bounds can be applied to characterize security properties of symmetric-key encryption schemes, and, in particular, extend security claims based on symbol secrecy to a functional setting., Comment: Submitted to IEEE Transactions on Information Theory
- Published
- 2015
35. Optimization-Based Linear Network Coding for General Connections of Continuous Flows
- Author
-
Cui, Ying, Médard, Muriel, Yeh, Edmund, Leith, Douglas, and Duffy, Ken
- Subjects
Computer Science - Information Theory - Abstract
For general connections, the problem of finding network codes and optimizing resources for those codes is intrinsically difficult and little is known about its complexity. Most of the existing solutions rely on very restricted classes of network codes in terms of the number of flows allowed to be coded together, and are not entirely distributed. In this paper, we consider a new method for constructing linear network codes for general connections of continuous flows to minimize the total cost of edge use based on mixing. We first formulate the minimum-cost network coding design problem. To solve the optimization problem, we propose two equivalent alternative formulations with discrete mixing and continuous mixing, respectively, and develop distributed algorithms to solve them. Our approach allows fairly general coding across flows and guarantees no greater cost than any solution without network coding., Comment: 1 fig, technical report of ICC 2015
- Published
- 2015
36. A Linear Network Code Construction for General Integer Connections Based on the Constraint Satisfaction Problem
- Author
-
Cui, Ying, Médard, Muriel, Lai, Fan, Yeh, Edmund, Leith, Douglas, Duffy, Ken, and Pandya, Dhaivat
- Subjects
Computer Science - Information Theory - Abstract
The problem of finding network codes for general connections is inherently difficult in capacity constrained networks. Resource minimization for general connections with network coding is further complicated. Existing methods for identifying solutions mainly rely on highly restricted classes of network codes, and are almost all centralized. In this paper, we introduce linear network mixing coefficients for code constructions of general connections that generalize random linear network coding (RLNC) for multicast connections. For such code constructions, we pose the problem of cost minimization for the subgraph involved in the coding solution and relate this minimization to a path-based Constraint Satisfaction Problem (CSP) and an edge-based CSP. While CSPs are NP-complete in general, we present a path-based probabilistic distributed algorithm and an edge-based probabilistic distributed algorithm with almost sure convergence in finite time by applying Communication Free Learning (CFL). Our approach allows fairly general coding across flows, guarantees no greater cost than routing, and shows a possible distributed implementation. Numerical results illustrate the performance improvement of our approach over existing methods., Comment: submitted to TON (conference version published at IEEE GLOBECOM 2015)
- Published
- 2015
37. Multi-user guesswork and brute force security
- Author
-
Christiansen, Mark M., Duffy, Ken R., Calmon, Flavio du Pin, and Medard, Muriel
- Subjects
Computer Science - Information Theory - Abstract
The Guesswork problem was originally motivated by a desire to quantify computational security for single user systems. Leveraging recent results from its analysis, we extend the remit and utility of the framework to the quantification of the computational security for multi-user systems. In particular, assume that $V$ users independently select strings stochastically from a finite, but potentially large, list. An inquisitor who does not know which strings have been selected wishes to identify $U$ of them. The inquisitor knows the selection probabilities of each user and is equipped with a method that enables the testing of each (user, string) pair, one at a time, for whether that string had been selected by that user. Here we establish that, unless $U=V$, there is no general strategy that minimizes the distribution of the number of guesses, but in the asymptote as the strings become long we prove the following: by construction, there is an asymptotically optimal class of strategies; the number of guesses required in an asymptotically optimal strategy satisfies a large deviation principle with a rate function, which is not necessarily convex, that can be determined from the rate functions of optimally guessing individual users' strings; if all users' selection statistics are identical, the exponential growth rate of the average guesswork as the string-length increases is determined by the specific R\'enyi entropy of the string-source with parameter $(V-U+1)/(V-U+2)$, generalizing the known $V=U=1$ case; and that the Shannon entropy of the source is a lower bound on the average guesswork growth rate for all $U$ and $V$, thus providing a bound on computational security for multi-user systems. Examples are presented to illustrate these results and their ramifications for systems design.
- Published
- 2014
38. Guessing a password over a wireless channel (on the effect of noise non-uniformity)
- Author
-
Christiansen, Mark M., Duffy, Ken R., Calmon, Flavio du Pin, and Medard, Muriel
- Subjects
Computer Science - Information Theory - Abstract
A string is sent over a noisy channel that erases some of its characters. Knowing the statistical properties of the string's source and which characters were erased, a listener that is equipped with an ability to test the veracity of a string, one string at a time, wishes to fill in the missing pieces. Here we characterize the influence of the stochastic properties of both the string's source and the noise on the channel on the distribution of the number of attempts required to identify the string, its guesswork. In particular, we establish that the average noise on the channel is not a determining factor for the average guesswork and illustrate simple settings where one recipient with, on average, a better channel than another recipient, has higher average guesswork. These results stand in contrast to those for the capacity of wiretap channels and suggest the use of techniques such as friendly jamming with pseudo-random sequences to exploit this guesswork behavior., Comment: Asilomar Conference on Signals, Systems & Computers, 2013
- Published
- 2013
39. Bounds on inference
- Author
-
Calmon, Flavio du Pin, Varia, Mayank, Médard, Muriel, Christiansen, Mark M., Duffy, Ken R., and Tessaro, Stefano
- Subjects
Computer Science - Information Theory - Abstract
Lower bounds for the average probability of error of estimating a hidden variable X given an observation of a correlated random variable Y, and Fano's inequality in particular, play a central role in information theory. In this paper, we present a lower bound for the average estimation error based on the marginal distribution of X and the principal inertias of the joint distribution matrix of X and Y. Furthermore, we discuss an information measure based on the sum of the largest principal inertias, called k-correlation, which generalizes maximal correlation. We show that k-correlation satisfies the Data Processing Inequality and is convex in the conditional distribution of Y given X. Finally, we investigate how to answer a fundamental question in inference and privacy: given an observation Y, can we estimate a function f(X) of the hidden random variable X with an average error below a certain threshold? We provide a general method for answering this question using an approach based on rate-distortion theory., Comment: Allerton 2013 with extended proof, 10 pages
- Published
- 2013
40. Brute force searching, the typical set and Guesswork
- Author
-
Christiansen, Mark M., Duffy, Ken R., Calmon, Flavio du Pin, and Medard, Muriel
- Subjects
Computer Science - Information Theory ,Computer Science - Cryptography and Security - Abstract
Consider the situation where a word is chosen probabilistically from a finite list. If an attacker knows the list and can inquire about each word in turn, then selecting the word via the uniform distribution maximizes the attacker's difficulty, its Guesswork, in identifying the chosen word. It is tempting to use this property in cryptanalysis of computationally secure ciphers by assuming coded words are drawn from a source's typical set and so, for all intents and purposes, uniformly distributed within it. Here, by applying recent results on Guesswork for i.i.d. sources, we investigate this equipartition ansatz. In particular, we demonstrate that the expected Guesswork for a source conditioned to create words in the typical set grows, with word length, at a lower exponential rate than that of the uniform approximation, suggesting that use of the approximation is ill-advised., Comment: ISIT 2013, with extended proof
- Published
- 2013
41. Lists that are smaller than their parts: A coding approach to tunable secrecy
- Author
-
Calmon, Flavio du Pin, Médard, Muriel, Zeger, Linda M., Barros, João, Christiansen, Mark M., and Duffy, Ken. R.
- Subjects
Computer Science - Information Theory ,Computer Science - Cryptography and Security - Abstract
We present a new information-theoretic definition and associated results, based on list decoding in a source coding setting. We begin by presenting list-source codes, which naturally map a key length (entropy) to list size. We then show that such codes can be analyzed in the context of a novel information-theoretic metric, \epsilon-symbol secrecy, that encompasses both the one-time pad and traditional rate-based asymptotic metrics, but, like most cryptographic constructs, can be applied in non-asymptotic settings. We derive fundamental bounds for \epsilon-symbol secrecy and demonstrate how these bounds can be achieved with MDS codes when the source is uniformly distributed. We discuss applications and implementation issues of our codes., Comment: Allerton 2012, 8 pages
- Published
- 2012
42. Guesswork, large deviations and Shannon entropy
- Author
-
Christiansen, Mark M. and Duffy, Ken R.
- Subjects
Computer Science - Information Theory ,94A17 - Abstract
How hard is it to guess a password? Massey showed that the Shannon entropy of the distribution from which the password is selected is a lower bound on the expected number of guesses, but one which is not tight in general. In a series of subsequent papers under ever less restrictive stochastic assumptions, an asymptotic relationship as password length grows between scaled moments of the guesswork and specific R\'{e}nyi entropy was identified. Here we show that, when appropriately scaled, as the password length grows the logarithm of the guesswork satisfies a Large Deviation Principle (LDP), providing direct estimates of the guesswork distribution when passwords are long. The rate function governing the LDP possesses a specific, restrictive form that encapsulates underlying structure in the nature of guesswork. Returning to Massey's original observation, a corollary to the LDP shows that the expectation of the logarithm of the guesswork is the specific Shannon entropy of the password selection process.
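The corollary can be checked numerically for short passwords. The sketch below, with illustrative function names, assumes i.i.d. Bernoulli($p$) characters: it ranks all length-$n$ binary strings from most to least likely (so the string at rank $g$ has guesswork $g$) and computes the expected log-guesswork per character, which stays below the Shannon entropy of the character distribution and approaches it as $n$ grows.

```python
import itertools
import math

def expected_log_guesswork_rate(p, n):
    """E[log2 G]/n for the optimal guessing order over all length-n
    binary strings with i.i.d. Bernoulli(p) characters: sort string
    probabilities in decreasing order; the rank of a string in that
    order is its guesswork G."""
    probs = sorted(
        (p ** sum(s) * (1 - p) ** (n - sum(s))
         for s in itertools.product((0, 1), repeat=n)),
        reverse=True)
    return sum(q * math.log2(g) for g, q in enumerate(probs, 1)) / n

p = 0.3
shannon = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
r2 = expected_log_guesswork_rate(p, 2)    # short strings
r12 = expected_log_guesswork_rate(p, 12)  # longer strings, closer to shannon
```

Since in an optimal order at most $1/P(x)$ strings are at least as likely as $x$, $\log_2 G(x) \le -\log_2 P(x)$, so the computed rate is always below the Shannon entropy, consistent with it being the limiting value rather than a bound attained at finite length.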
- Published
- 2012