
Soft-output (SO) GRAND and Iterative Decoding to Outperform LDPCs

Authors:
Yuan, Peihong
Médard, Muriel
Galligan, Kevin
Duffy, Ken R.
Publication Year:
2023

Abstract

We establish that a large, flexible class of long, high-redundancy error-correcting codes can be efficiently and accurately decoded with guessing random additive noise decoding (GRAND). Performance evaluation demonstrates that it is possible to construct simple concatenated codes that outperform low-density parity-check (LDPC) codes found in the 5G New Radio standard in both additive white Gaussian noise (AWGN) and fading channels. The concatenated structure enables many desirable features, including: low-complexity, hardware-friendly encoding and decoding; significant flexibility in length and rate through modularity; and high levels of parallelism in encoding and decoding that enable low latency. Central is the development of a method through which any soft-input (SI) GRAND algorithm can provide soft output (SO) in the form of an accurate a-posteriori estimate of the likelihood that a decoding is correct or, in the case of list decoding, the likelihood that each element of the list is correct. The distinguishing feature of soft-output GRAND (SOGRAND) is the provision of an estimate that the correct decoding has not been found, even when providing a single decoding. That per-block SO can be converted into accurate per-bit SO by a weighted sum that includes a term for the SI. Implementing SOGRAND adds negligible computation and memory to the existing decoding process, and using it results in a practical, low-latency alternative to LDPC codes.

Comment: arXiv admin note: substantial text overlap with arXiv:2305.05777
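For intuition only, the sketch below shows the basic GRAND guessing loop that the abstract builds on: candidate noise patterns are tested in order of decreasing likelihood (plain Hamming-weight order in this hard-input illustration) until one yields a member of the codebook. This is not the paper's SOGRAND algorithm; the soft-input query ordering, the a-posteriori correctness estimate, and the per-bit soft output are all omitted, and the function name grand_decode, the parity-check matrix H, and the max_weight abandonment threshold are illustrative assumptions rather than anything specified in the paper.

```python
# A minimal, illustrative sketch of hard-input GRAND over a binary linear code.
# Assumed inputs: a parity-check matrix H (shape (n-k, n), entries 0/1) and a
# hard-decision received word y of length n. Hamming-weight ordering stands in
# for the likelihood-ordered noise queries of soft-input GRAND; no soft output
# is computed here.
import itertools
import numpy as np

def grand_decode(y, H, max_weight=3):
    """Test noise patterns e in increasing Hamming weight; return the first
    candidate c = y XOR e whose syndrome H c^T is zero (a codebook member)."""
    n = len(y)
    for w in range(max_weight + 1):
        for flips in itertools.combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(flips)] = 1
            c = (y + e) % 2                # apply the guessed noise pattern
            if not np.any((H @ c) % 2):    # zero syndrome: c is a codeword
                return c, w                # decoding and its noise weight
    return None, None                      # abandon: report a decoding failure
```

SOGRAND, as the abstract describes, augments a loop of this kind with an estimate of the likelihood that the returned decoding (or each element of a decoding list) is correct, including an estimate that no correct decoding has been found; how that estimate is formed is detailed in the paper.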

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2310.10737
Document Type:
Working Paper