1,065 results for '"Finite length"'
Search Results
2. Spatially Coupled Generalized LDPC Codes: Asymptotic Analysis and Finite Length Scaling.
- Author
-
Mitchell, David G. M., Olmos, Pablo M., Lentmaier, Michael, and Costello, Daniel J.
- Subjects
- *
LOW density parity check codes , *BLOCK codes , *ITERATIVE decoding , *LINEAR codes , *TANNER graphs , *DRUM set - Abstract
Generalized low-density parity-check (GLDPC) codes are a class of LDPC codes in which the standard single parity check (SPC) constraints are replaced by constraints defined by a linear block code. These stronger constraints typically result in improved error floor performance, due to better minimum distance and trapping set properties, at a cost of some increased decoding complexity. In this paper, we study spatially coupled generalized low-density parity-check (SC-GLDPC) codes and present a comprehensive analysis of these codes, including: (1) an iterative decoding threshold analysis of SC-GLDPC code ensembles demonstrating capacity approaching thresholds via the threshold saturation effect; (2) an asymptotic analysis of the minimum distance and free distance properties of SC-GLDPC code ensembles, demonstrating that the ensembles are asymptotically good; and (3) an analysis of the finite-length scaling behavior of both GLDPC block codes and SC-GLDPC codes based on a peeling decoder (PD) operating on a binary erasure channel (BEC). Results are compared to GLDPC block codes, and the advantages and disadvantages of SC-GLDPC codes are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
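The finite-length scaling analysis in this entry is built around a peeling decoder (PD) operating on the BEC. As a minimal illustration of that decoder (for plain single-parity-check LDPC constraints, not the generalized constraints studied in the paper), the following Python sketch repeatedly looks for a check with exactly one erased neighbour and solves for it; the toy parity-check matrix is only an example.

```python
import numpy as np

def peel(H, y):
    """Peeling decoder on the BEC.

    H: binary parity-check matrix (numpy array, m x n)
    y: received word with erasures marked as -1 (numpy int array, length n)
    Returns the (partially) recovered word; -1 entries remain unresolved.
    """
    y = y.copy()
    progress = True
    while progress:
        progress = False
        for check in H:                      # one SPC constraint per row
            erased = np.where((check == 1) & (y == -1))[0]
            if len(erased) == 1:             # degree-one check: solve for the erasure
                known = np.where((check == 1) & (y != -1))[0]
                y[erased[0]] = y[known].sum() % 2
                progress = True
    return y

# toy example: parity checks of the (7,4) Hamming code, two erasures
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
tx = np.zeros(7, dtype=int)                  # all-zero codeword
rx = tx.copy(); rx[[0, 4]] = -1              # erase positions 0 and 4
print(peel(H, rx))                           # both erasures are recovered here
```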
3. Explicit and Efficient WOM Codes of Finite Length
- Author
-
Alexander Vardy, Yeow Meng Chee, Han Mao Kiah, and Eitan Yaakobi
- Subjects
High rate ,Discrete mathematics ,Binary number ,020206 networking & telecommunications ,02 engineering and technology ,Library and Information Sciences ,Upper and lower bounds ,Computer Science Applications ,Rate of convergence ,0202 electrical engineering, electronic engineering, information engineering ,Figure of merit ,Decoding methods ,Small probability ,Information Systems ,Coding (social sciences) ,Mathematics - Abstract
Write-once memory (WOM) is a storage device consisting of binary cells that can only increase their levels. A $t$ -write WOM code is a coding scheme that makes it possible to write $t$ times to a WOM without decreasing the levels of any of the cells. The sum-rate of a WOM code is the ratio between the total number of bits written to the memory during the $t$ writes and the number of cells. It is known that the maximum possible sum-rate of a $t$ -write WOM code is $\log (t+1)$ . This is also an achievable upper bound, both by information-theoretic arguments and through explicit constructions. While existing constructions of WOM codes are targeted at the sum-rate, we consider here two more figures of merit. The first one is the complexity of the encoding and decoding maps. The second figure of merit is the convergence rate , defined as the minimum code length $n(\delta)$ required to reach a point that is $\delta $ -close to the capacity region. One of our main results in this paper is a capacity-achieving construction of two-write WOM codes which has polynomial encoding/decoding complexity while the block length $n(\delta)$ required to be $\delta $ -close to capacity is significantly smaller than existing constructions. Using these two-write WOM codes, we then obtain three-write WOM codes that approach a sum-rate of 1.809 at relatively short block lengths. We also provide several explicit constructions of finite length three-write WOM codes; in particular, we achieve a sum-rate of 1.716 by using only 93 cells. Finally, we modify our two-write WOM codes to construct $\epsilon $ -error WOM codes of high rates and small probability of failure.
- Published
- 2020
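As a quick numerical companion to the sum-rate figures quoted in this entry (the $\log(t+1)$ capacity, and the 1.809 and 1.716 sum-rates for three writes), a short Python check:

```python
import math

def max_sum_rate(t):
    """Maximum possible sum-rate of a t-write binary WOM code: log2(t + 1)."""
    return math.log2(t + 1)

for t in (2, 3):
    print(f"t = {t}: capacity {max_sum_rate(t):.3f}")
# t = 2: capacity 1.585
# t = 3: capacity 2.000 -> the quoted 1.809 and 1.716 sum-rates sit below this limit
```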
4. Explicit and Efficient WOM Codes of Finite Length.
- Author
-
Chee, Yeow Meng, Kiah, Han Mao, Vardy, Alexander, and Yaakobi, Eitan
- Subjects
- *
CIPHERS , *ADDITIVE white Gaussian noise channels , *FLASH memory - Abstract
Write-once memory (WOM) is a storage device consisting of binary cells that can only increase their levels. A $t$ -write WOM code is a coding scheme that makes it possible to write $t$ times to a WOM without decreasing the levels of any of the cells. The sum-rate of a WOM code is the ratio between the total number of bits written to the memory during the $t$ writes and the number of cells. It is known that the maximum possible sum-rate of a $t$ -write WOM code is $\log (t+1)$. This is also an achievable upper bound, both by information-theoretic arguments and through explicit constructions. While existing constructions of WOM codes are targeted at the sum-rate, we consider here two more figures of merit. The first one is the complexity of the encoding and decoding maps. The second figure of merit is the convergence rate, defined as the minimum code length $n(\delta)$ required to reach a point that is $\delta $ -close to the capacity region. One of our main results in this paper is a capacity-achieving construction of two-write WOM codes which has polynomial encoding/decoding complexity while the block length $n(\delta)$ required to be $\delta $ -close to capacity is significantly smaller than existing constructions. Using these two-write WOM codes, we then obtain three-write WOM codes that approach a sum-rate of 1.809 at relatively short block lengths. We also provide several explicit constructions of finite length three-write WOM codes; in particular, we achieve a sum-rate of 1.716 by using only 93 cells. Finally, we modify our two-write WOM codes to construct $\epsilon $ -error WOM codes of high rates and small probability of failure. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
5. Near-Optimal Finite-Length Scaling for Polar Codes Over Large Alphabets.
- Author
-
Pfister, Henry D. and Urbanke, Rudiger L.
- Subjects
- *
MATRIX effect , *BINARY codes , *LYAPUNOV functions , *CIPHERS , *FINITE fields - Abstract
For any prime power $q$ , Mori and Tanaka introduced a family of $q$ -ary polar codes based on the $q \times q$ Reed–Solomon polarization kernels. For transmission over a $q$ -ary erasure channel, they also derived a closed-form recursion for the erasure probability of each effective channel. In this paper, we use that expression to analyze the finite-length scaling of these codes on the $q$ -ary erasure channel with erasure probability $\epsilon \in (0,1)$. Our primary result is that for any $\gamma >0$ and $\delta >0$ , there is a $q_{0}$ such that for all $q\geq q_{0}$ , the fraction of effective channels with erasure rate at most $N^{-\gamma }$ is at least $1-\epsilon -O(N^{-1/2+\delta })$ , where $N=q^{n}$ is the blocklength. Since this fraction cannot be larger than $1-\epsilon -O(N^{-1/2})$ , this establishes near-optimal finite-length scaling for this family of codes. Our approach can be seen as an extension of a similar analysis for binary polar codes by Hassani, Alishahi, and Urbanke. A similar analysis is also considered for $q$ -ary polar codes with $m \times m$ polarizing matrices. This separates the effect of the alphabet size from the effect of the matrix size. If the polarizing matrix at each stage is drawn independently and uniformly from the set of invertible $m \times m$ matrices, then the linear operator associated with the Lyapunov function analysis can be written in closed form. To prove near-optimal scaling for polar codes with fixed $q$ as $m$ increases, however, two technical obstacles remain. Thus, we conclude by stating two concrete mathematical conjectures that, if proven, would imply near-optimal scaling for fixed $q$. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
6. Finite-length scaling for iteratively decoded LDPC ensembles
- Author
-
Amraoui, Abdelaziz, Montanari, Andrea, Richardson, Tom, and Urbanke, Rudiger
- Subjects
Decoders -- Usage ,Decoders -- Analysis ,Error-correcting codes -- Analysis ,Finite element method -- Usage - Abstract
We investigate the behavior of iteratively decoded low-density parity-check (LDPC) codes over the binary erasure channel in the so-called "waterfall region." We show that the performance curves in this region follow a simple scaling law. We conjecture that essentially the same scaling behavior applies in a much more general setting and we provide some empirical evidence to support this conjecture. The scaling law, together with the error floor expressions developed previously, can be used for a fast finite-length optimization. Index Terms--Binary erasure channel, density evolution, error probability curve, finite-length analysis, iterative decoding, low-density parity-check (LDPC) codes.
- Published
- 2009
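The scaling law referred to in this entry takes, in its basic form for the BEC, the shape $P_B \approx Q(\sqrt{n}\,(\epsilon^* - \epsilon)/\alpha)$, where $\epsilon^*$ is the ensemble threshold and $\alpha$ a scaling parameter (the paper also derives a refined shift term). A minimal sketch with purely illustrative parameter values (the threshold chosen is close to that of the (3,6)-regular ensemble; the value of alpha is not taken from the paper):

```python
import math

def q_func(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def waterfall_block_error(n, eps, eps_star, alpha):
    """Basic form of the finite-length scaling law on the BEC:
    P_B ~= Q( sqrt(n) * (eps_star - eps) / alpha ).
    eps_star and alpha are ensemble-dependent; the values used below
    are illustrative only."""
    return q_func(math.sqrt(n) * (eps_star - eps) / alpha)

for n in (1024, 4096, 16384):
    print(n, waterfall_block_error(n, eps=0.41, eps_star=0.4294, alpha=0.58))
```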
7. A finite-length algorithm for LDPC codes without repeated edges on the binary erasure channel
- Author
-
Johnson, Sarah J.
- Subjects
Algorithm ,Algorithms -- Analysis - Abstract
This paper considers the performance, on the binary erasure channel, of low-density parity-check (LDPC) codes without repeated edges in their Tanner graphs. A modification to existing finite-length analysis algorithms is presented for these codes. Index Terms--Binary erasure channels, finite-length analysis, iterative decoding, low-density parity-check (LDPC) codes, message-passing decoding.
- Published
- 2009
8. Finite-Length Analysis of Spatially-Coupled Regular LDPC Ensembles on Burst-Erasure Channels
- Author
-
Laurent Schmalen, Narayanan Rengaswamy, and Vahid Aref
- Subjects
FOS: Computer and information sciences ,Computer science ,Computer Science - Information Theory ,Information Theory (cs.IT) ,05 social sciences ,Code word ,050801 communication & media studies ,020206 networking & telecommunications ,02 engineering and technology ,Library and Information Sciences ,Binary erasure channel ,Computer Science Applications ,0508 media and communications ,0202 electrical engineering, electronic engineering, information engineering ,Erasure ,Node (circuits) ,Low-density parity-check code ,Algorithm ,Decoding methods ,Computer Science::Information Theory ,Information Systems ,Block (data storage) ,Communication channel - Abstract
Regular spatially-coupled LDPC (SC-LDPC) ensembles have gained significant interest since they were shown to universally achieve the capacity of binary memoryless channels under low-complexity belief-propagation decoding. In this work, we focus primarily on the performance of these ensembles over binary channels affected by bursts of erasures. We first develop an analysis of the finite length performance for a single burst per codeword and no errors otherwise. We first assume that the burst erases a complete spatial position, modeling for instance node failures in distributed storage. We provide new tight lower bounds for the block erasure probability ($P_{\mathrm{B}}$) at finite block length and bounds on the coupling parameter for being asymptotically able to recover the burst. We further show that expurgating the ensemble can improve the block erasure probability by several orders of magnitude. Later we extend our methodology to more general channel models. In a first extension, we consider bursts that can start at a random position in the codeword and span across multiple spatial positions. Besides the finite length analysis, we determine by means of density evolution the maximum correctable burst length. In a second extension, we consider the case where in addition to a single burst, random bit erasures may occur. Finally, we consider a block erasure channel model which erases each spatial position independently with some probability $p$, potentially introducing multiple bursts simultaneously. All results are verified using Monte-Carlo simulations., Comment: accepted for publication in IEEE Transactions on Information Theory
- Published
- 2018
9. The universal LZ77 compression algorithm is essentially optimal for individual finite-length N-blocks
- Author
-
Ziv, Jacob
- Subjects
Algorithm ,Algorithms -- Usage ,Data compression -- Analysis - Abstract
Consider the case where consecutive blocks of N letters of a semi-infinite individual sequence X over a finite alphabet are being compressed into binary sequences by some one-to-one mapping. No a priori information about X is available at the encoder, which must therefore adopt a universal data-compression algorithm. It is known that there exist a number of asymptotically optimal universal data compression algorithms (e.g., the Lempel-Ziv (LZ) algorithm, context tree algorithm and an adaptive Huffman algorithm) such that when successively applied to N-blocks then, the best error-free compression for the particular individual sequence X is achieved as N tends to infinity. The best possible compression that may be achieved by any universal data compression algorithm for finite N-blocks is discussed. Essential optimality for the compression of finite-length sequences is defined. It is shown that the LZ77 universal compression of N-blocks is essentially optimal for finite N-blocks. Previously, it has been demonstrated that a universal context tree compression of N blocks is essentially optimal as well. Index Terms--Context-tree coding, data compression, universal compression.
- Published
- 2009
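For readers who want a concrete picture of the LZ77 parsing analyzed in this entry, here is a deliberately simple greedy parser (a didactic, quadratic-time sketch, not the windowed variant used in practice):

```python
def lz77_parse(s):
    """Greedy LZ77 parse of a string: at each step emit the longest match
    into the already-processed prefix plus one literal. Returns the list of
    (offset, length, literal) triples."""
    i, out = 0, []
    while i < len(s):
        best_len, best_off = 0, 0
        for j in range(i):                       # candidate match starts
            k = 0
            while i + k < len(s) - 1 and s[j + k] == s[i + k]:
                k += 1
            if k > best_len:
                best_len, best_off = k, i - j
        lit = s[i + best_len]
        out.append((best_off, best_len, lit))
        i += best_len + 1
    return out

phrases = lz77_parse("abracadabra abracadabra")
print(len(phrases), phrases)   # phrase count shrinks as repetition increases
```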
10. An improved sphere-packing bound for finite-length codes over symmetric memoryless channels
- Author
-
Wiechman, Gil and Sason, Igal
- Subjects
Decoders -- Analysis ,Gaussian processes -- Analysis - Abstract
This paper derives an improved sphere-packing (ISP) bound for finite-length error-correcting codes whose transmission takes place over symmetric memoryless channels, and the codes are decoded with an arbitrary list decoder. We first review classical results, i.e., the 1959 sphere-packing (SP59) bound of Shannon for the Gaussian channel, and the 1967 sphere-packing (SP67) bound of Shannon et al. for discrete memoryless channels. An improvement on the SP67 bound, as suggested by Valembois and Fossorier, is also discussed. These concepts are used for the derivation of a new lower bound on the error probability of list decoding (referred to as the ISP bound) which is uniformly tighter than the SP67 bound and its improved version. The ISP bound is applicable to symmetric memoryless channels, and some of its applications are presented. Its tightness under maximum-likelihood (ML) decoding is studied by comparing the ISP bound to previously reported upper and lower bounds on the ML decoding error probability, and also to computer simulations of iteratively decoded turbo-like codes. This paper also presents a technique which performs the entire calculation of the SP59 bound in the logarithmic domain, thus facilitating the exact calculation of this bound for moderate to large block lengths without the need for the asymptotic approximations provided by Shannon. Index Terms--Block codes, error exponent, list decoding, sphere-packing bound, turbo-like codes.
- Published
- 2008
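The remark in this entry about carrying out the SP59 computation "entirely in the logarithmic domain" rests on the standard log-sum-exp device for summing terms that individually underflow or overflow; a generic sketch of that device (not of the bound itself):

```python
import math

def log_sum_exp(log_terms):
    """Compute log(sum_i exp(a_i)) stably by factoring out the maximum term."""
    m = max(log_terms)
    if m == float("-inf"):
        return m
    return m + math.log(sum(math.exp(a - m) for a in log_terms))

# example: a binomial left tail, with every term kept in the log domain
n, p, t = 10000, 0.01, 50
log_terms = [math.lgamma(n + 1) - math.lgamma(i + 1) - math.lgamma(n - i + 1)
             + i * math.log(p) + (n - i) * math.log(1 - p)
             for i in range(t + 1)]
print(math.exp(log_sum_exp(log_terms)))   # P(Binomial(n, p) <= t)
```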
11. Turbo decoding on the binary erasure channel: finite-length analysis and turbo stopping sets
- Author
-
Rosnes, Eirik and Ytrehus, Oyvind
- Subjects
Algorithm ,Algorithms -- Usage ,Decoders -- Design and construction ,Communications circuits -- Design and construction - Abstract
This paper is devoted to the finite-length analysis of turbo decoding over the binary erasure channel (BEC). The performance of iterative belief-propagation decoding of low-density parity-check (LDPC) codes over the BEC can be characterized in terms of stopping sets. We describe turbo decoding on the BEC which is simpler than turbo decoding on other channels. We then adapt the concept of stopping sets to turbo decoding and state an exact condition for decoding failure. Apply turbo decoding until the transmitted codeword has been recovered, or the decoder fails to progress further. Then the set of erased positions that will remain when the decoder stops is equal to the unique maximum-size turbo stopping set which is also a subset of the set of erased positions. Furthermore, we present some improvements of the basic turbo decoding algorithm on the BEC. The proposed improved turbo decoding algorithm has substantially better error performance as illustrated by the given simulation results. Finally, we give an expression for the turbo stopping set size enumerating function under the uniform interleaver assumption, and an efficient enumeration algorithm of small-size turbo stopping sets for a particular interleaver. The solution is based on the algorithm proposed by Garello et al. in 2001 to compute an exhaustive list of all low-weight codewords in a turbo code. Index Terms--Binary erasure channel (BEC), improved decoding, stopping set, turbo decoding, uniform interleaver, weight spectrum.
- Published
- 2007
12. Finite-length analysis of low-density parity-check codes on the binary erasure channel
- Author
-
Di, Changyan, Proietti, David, Telatar, I. Emre, Richardson, Thomas J., and Urbanke, Rudiger L.
- Subjects
Information theory -- Models - Abstract
In this paper, we are concerned with the finite-length analysis of low-density parity-check (LDPC) codes when used over the binary erasure channel (BEC). The main result is an expression for the exact average bit and block erasure probability for a given regular ensemble of LDPC codes when decoded iteratively. We also give expressions for upper bounds on the average bit and block erasure probability for regular LDPC ensembles and the standard random ensemble under maximum-likelihood (ML) decoding. Finally, we present what we consider to be the most important open problems in this area. Index Terms--Belief propagation, binary erasure channel (BEC), finite-length analysis, low-density parity-check (LDPC) codes.
- Published
- 2002
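Finite-length analysis of iterative decoding on the BEC, as in this entry, is usually phrased in terms of stopping sets: a set of variable nodes such that every check node connected to the set is connected to it at least twice. Iterative decoding fails exactly when the erased positions contain a nonempty stopping set. A brute-force illustration on a toy parity-check matrix (exponential search, only sensible for very short codes):

```python
from itertools import combinations
import numpy as np

def is_stopping_set(H, S):
    """S (a set of column indices of H) is a stopping set iff no row of H
    has exactly one 1 inside S."""
    if not S:
        return True                      # the empty set is trivially a stopping set
    sub = H[:, sorted(S)]
    return not np.any(sub.sum(axis=1) == 1)

def smallest_nonempty_stopping_set(H):
    """Exhaustive search over supports, smallest first."""
    n = H.shape[1]
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            if is_stopping_set(H, set(S)):
                return set(S)
    return None

H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
print(smallest_nonempty_stopping_set(H))   # {0, 1, 2} for this toy matrix
```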
13. Finite-Length Analysis of Spatially-Coupled Regular LDPC Ensembles on Burst-Erasure Channels.
- Author
-
Aref, Vahid, Schmalen, Laurent, and Rengaswamy, Narayanan
- Subjects
- *
BINARY erasure channels (Telecommunications) , *MEMORYLESS systems , *PARITY-check matrix , *ITERATIVE decoding , *MONTE Carlo method , *MATHEMATICAL bounds - Abstract
Regular spatially-coupled low-density parity-check ensembles have gained significant interest, since they were shown to universally achieve the capacity of binary memoryless channels under low-complexity belief-propagation decoding. In this paper, we focus primarily on the performance of these ensembles over binary channels affected by bursts of erasures. We first develop an analysis of the finite length performance for a single burst per code word and no errors otherwise. We first assume that the burst erases a complete spatial position, modeling for instance node failures in distributed storage. We provide new tight lower bounds for the block erasure probability ($P_{\mathrm{B}}$) at finite block length and bounds on the coupling parameter for being asymptotically able to recover the burst. We further show that expurgating the ensemble can improve the block erasure probability by several orders of magnitude. Later we extend our methodology to more general channel models. In a first extension, we consider bursts that can start at a random location in the code word and span across multiple spatial positions. Besides the finite length analysis, we determine by means of density evolution the maximum correctable burst length. In a second extension, we consider the case where in addition to a single burst, random bit erasures may occur. Finally, we consider a block erasure channel model which erases each spatial position independently with some probability $p$, potentially introducing multiple bursts simultaneously. All results are verified using Monte-Carlo simulations. [ABSTRACT FROM PUBLISHER]
- Published
- 2018
- Full Text
- View/download PDF
14. Finite-Length Analysis of BATS Codes.
- Author
-
Yang, Shenghao, Ng, Tsz-Ching, and Yeung, Raymond W.
- Subjects
- *
CODING theory , *DECODING algorithms , *PROBABILITY theory , *MATHEMATICAL optimization , *WIRELESS communications - Abstract
BATS codes were proposed for communication through networks with packet loss. A BATS code consists of an outer code and an inner code. The outer code is a matrix generation of a fountain code, which works with the inner code that comprises random linear coding at the intermediate network nodes. In this paper, the performance of finite-length BATS codes is analyzed with respect to both belief propagation (BP) decoding and inactivation decoding. Our results enable us to evaluate efficiently the finite-length performance in terms of the number of batches used for decoding ranging from 1 to a given maximum number, and provide new insights on the decoding performance. Specifically, for a fixed number of input symbols and a range of the number of batches used for decoding, we obtain recursive formulae to calculate the stopping time distribution of BP decoding and the inactivation probability in inactivation decoding. We also find that both the failure probability of BP decoding and the expected number of inactivations in inactivation decoding can be expressed in a power-sum form where the number of batches appears only as the exponent. This power-sum expression reveals clearly how the decoding failure probability and the expected number of inactivations decrease with the number of batches. When the number of batches used for decoding follows a Poisson distribution, we further derive recursive formulas with potentially lower computational complexity for both decoding algorithms. For the BP decoder that consumes batches one by one, three formulae are provided to characterize the expected number of consumed batches until all the input symbols are decoded. [ABSTRACT FROM PUBLISHER]
- Published
- 2018
- Full Text
- View/download PDF
15. On the nonlinear complexity and Lempel-Ziv complexity of finite length sequences
- Author
-
Limniotis, Konstantinos, Kolokotronis, Nicholas, and Kalouptsidis, Nicholas
- Subjects
Algorithm ,Algorithms -- Usage ,Eigenvalues -- Measurement ,Sequences (Mathematics) -- Properties ,Cryptography -- Research - Abstract
The nonlinear complexity of binary sequences and its connections with Lempel-Ziv complexity is studied in this paper. A new recursive algorithm is presented, which produces the minimal nonlinear feedback shift register of a given binary sequence. Moreover, it is shown that the eigenvalue profile of a sequence uniquely determines its nonlinear complexity profile, thus establishing a connection between Lempel-Ziv complexity and nonlinear complexity. Furthermore, a lower bound for the Lempel-Ziv compression ratio of a given sequence is proved that depends on its nonlinear complexity. Index Terms--Compression, cryptography, eigenvalue, Lempel-Ziv complexity, nonlinear complexity, nonlinear feedback shift registers, sequences.
- Published
- 2007
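The nonlinear complexity discussed in this entry is the length of the shortest (possibly nonlinear) feedback shift register generating the sequence; equivalently, the smallest window length k for which equal length-k subwords are always followed by the same symbol. A brute-force sketch under that definition (intended only to illustrate the quantity, not the paper's recursive algorithm):

```python
def nonlinear_complexity(seq):
    """Smallest k such that a (possibly nonlinear) feedback shift register of
    length k can generate seq: no two identical length-k subwords of seq may
    be followed by different symbols. Brute force, fine for short sequences."""
    n = len(seq)
    for k in range(n):
        seen = {}
        consistent = True
        for i in range(n - k):
            window = tuple(seq[i:i + k])
            nxt = seq[i + k]
            if window in seen and seen[window] != nxt:
                consistent = False
                break
            seen[window] = nxt
        if consistent:
            return k
    return n

print(nonlinear_complexity([0, 0, 0, 1, 0, 1, 1, 1]))   # 3 for this toy sequence
```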
16. On the Finite Length Scaling of $q$ -Ary Polar Codes
- Author
-
Dina Goldin and David Burshtein
- Subjects
Discrete mathematics ,0209 industrial biotechnology ,Inverse ,020206 networking & telecommunications ,02 engineering and technology ,Code rate ,Library and Information Sciences ,Polarization (waves) ,Upper and lower bounds ,Computer Science Applications ,Channel capacity ,020901 industrial engineering & automation ,ComputingMethodologies_SYMBOLICANDALGEBRAICMANIPULATION ,0202 electrical engineering, electronic engineering, information engineering ,Polar ,Scaling ,Decoding methods ,Information Systems ,Mathematics - Abstract
The polarization process of polar codes over a prime $q$ -ary alphabet is studied. Recently, it has been shown that the blocklength of polar codes with prime alphabet size scales polynomially with respect to the inverse of the gap between code rate and channel capacity. However, except for the binary case, the degree of the polynomial in the bound is extremely large. In this paper, a different approach to computing the degree of this polynomial for any prime alphabet size is shown. This approach yields a lower degree polynomial for various values of the alphabet size that were examined. It is also shown that an even lower degree polynomial can be computed with additional numerical effort.
- Published
- 2018
17. Optimal Finite-Length and Asymptotic Index Codes for Five or Fewer Receivers.
- Author
-
Ong, Lawrence
- Subjects
- *
LINEAR network coding , *VARIABLE-length codes , *ASYMPTOTIC normality , *SUBGRAPHS ,COMPUTER access control code words - Abstract
Index coding models broadcast networks in which a sender sends different messages to different receivers simultaneously, where each receiver may know some of the messages a priori. The aim is to find the minimum (normalized) index codelength that the sender sends. This paper considers unicast index coding, where each receiver requests exactly one message, and each message is requested by exactly one receiver. Each unicast index-coding instance can be fully described by a directed graph and vice versa, where each vertex corresponds to one receiver. For any directed graph representing a unicast index-coding instance, we show that if a maximum acyclic induced subgraph (MAIS) is obtained by removing two or fewer vertices from the graph, then the minimum index codelength equals the number of vertices in the MAIS, and linear codes are optimal for the corresponding index-coding instance. Using this result, we solved all unicast index-coding instances with up to five receivers, which correspond to all graphs with up to five vertices. For 9818 non-isomorphic graphs among all graphs up to five vertices, we obtained the minimum index codelength for all message alphabet sizes; for the remaining 28 graphs, we obtained the minimum index codelength if the message alphabet size is $k^2$ for any positive integer $k$. This work complements the result by Arbabjolfaei et al. (ISIT 2013), who solved all unicast index-coding instances with up to five receivers in the asymptotic regime, where the message alphabet size tends to infinity. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
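The MAIS quantity used in this entry can be computed by exhaustive search for the small (at most five-vertex) graphs considered; a sketch on a hypothetical toy digraph:

```python
from itertools import combinations

def has_cycle(vertices, edges):
    """DFS check for a directed cycle in the sub-digraph induced by `vertices`."""
    adj = {v: [w for (u, w) in edges if u == v and w in vertices] for v in vertices}
    WHITE, GREY, BLACK = 0, 1, 2
    color = {v: WHITE for v in vertices}

    def dfs(v):
        color[v] = GREY
        for w in adj[v]:
            if color[w] == GREY:
                return True
            if color[w] == WHITE and dfs(w):
                return True
        color[v] = BLACK
        return False

    return any(color[v] == WHITE and dfs(v) for v in vertices)

def mais(n, edges):
    """Size of a maximum acyclic induced subgraph of a digraph on vertices 0..n-1.
    Exhaustive search, fine for the <= 5-vertex instances discussed above."""
    for size in range(n, 0, -1):
        for S in combinations(range(n), size):
            if not has_cycle(set(S), edges):
                return size
    return 0

# toy instance: a directed 3-cycle plus an isolated vertex
print(mais(4, [(0, 1), (1, 2), (2, 0)]))   # 3: drop one vertex of the cycle
```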
18. MMSE decision-feedback equalizers: finite-length results
- Author
-
Al-Dhahir, Naofal and Cioffi, John M.
- Subjects
Equalizers (Electronics) -- Research - Abstract
This paper extends a number of results on the infinite-length minimum-mean-square-error decision feedback equalizer (MMSE-DFE) reported in [8] to the finite-length case. Cholesky factorization and displacement structure theory are demonstrated to be two powerful analytical tools for analyzing the finite-length MMSE-DFE. Our objective throughout the paper is to establish finite-length analogs of the well-known infinite-length MMSE-DFE results. Similarities and differences between the two cases are examined and delineated. Finally, convergence of our derived finite-length results to their well-established infinite-length counterparts is shown. Index Terms - MMSE-DFE, finite-length constraint, displacement structure, Cholesky factorization.
- Published
- 1995
19. On the Finite Length Scaling of $q$ -Ary Polar Codes.
- Author
-
Goldin, Dina and Burshtein, David
- Subjects
- *
CODING theory , *POLYNOMIALS , *DISCRETE memoryless channels , *MATHEMATICAL bounds , *BROADCAST channels - Abstract
The polarization process of polar codes over a prime $q$ -ary alphabet is studied. Recently, it has been shown that the blocklength of polar codes with prime alphabet size scales polynomially with respect to the inverse of the gap between code rate and channel capacity. However, except for the binary case, the degree of the polynomial in the bound is extremely large. In this paper, a different approach to computing the degree of this polynomial for any prime alphabet size is shown. This approach yields a lower degree polynomial for various values of the alphabet size that were examined. It is also shown that an even lower degree polynomial can be computed with additional numerical effort. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
20. Finite-Length Analysis of Caching-Aided Coded Multicasting
- Author
-
Mingyue Ji, Antonia M. Tulino, Alexandros G. Dimakis, Karthikeyan Shanmugam, and Jaime Llorca
- Subjects
Discrete mathematics ,Theoretical computer science ,Multicast ,Network packet ,Computer science ,05 social sciences ,050801 communication & media studies ,020206 networking & telecommunications ,02 engineering and technology ,Library and Information Sciences ,Upper and lower bounds ,Coding gain ,Computer Science Applications ,0508 media and communications ,0202 electrical engineering, electronic engineering, information engineering ,Algorithm design ,Cache ,Clique cover ,Information Systems - Abstract
We study a noiseless broadcast link serving $K$ users whose requests arise from a library of $N$ files. Every user is equipped with a cache of size $M$ files each. It has been shown that by splitting all the files into packets and placing individual packets in a random independent manner across all the caches prior to any transmission, at most $N/M$ file transmissions are required for any set of demands from the library. The achievable delivery scheme involves linearly combining packets of different files following a greedy clique cover solution to the underlying index coding problem. This remarkable multiplicative gain of random placement and coded delivery has been established in the asymptotic regime when the number of packets per file $F$ scales to infinity. The asymptotic coding gain obtained is roughly $t=KM/N$ . In this paper, we initiate the finite-length analysis of random caching schemes when the number of packets $F$ is a function of the system parameters $M,N$ , and $K$ . Specifically, we show that the existing random placement and clique cover delivery schemes that achieve optimality in the asymptotic regime can have at most a multiplicative gain of 2 even if the number of packets is exponential in the asymptotic gain $t=K({M}/{N})$ . Furthermore, for any clique cover-based coded delivery and a large class of random placement schemes that include the existing ones, we show that the number of packets required to get a multiplicative gain of $({4}/{3})g$ is at least $O(({g}/{K})(N/M)^{g-1})$ . We design a new random placement and an efficient clique cover-based delivery scheme that achieves this lower bound approximately. We also provide tight concentration results that show that the average (over the random placement involved) number of transmissions concentrates very well requiring only a polynomial number of packets in the rest of the system parameters.
- Published
- 2016
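To make the "gain of roughly t = KM/N" in this entry concrete, here is a small numerical comparison. The coded rate formula used is the centralized Maddah-Ali--Niesen expression K(1 - M/N)/(1 + KM/N), quoted only as a reference point; it is not the random-placement, finite-packetization scheme analyzed in the paper.

```python
def uncoded_rate(K, M, N):
    """Transmissions needed without coding: each user misses a (1 - M/N) fraction."""
    return K * (1 - M / N)

def coded_rate_centralized(K, M, N):
    """Centralized coded-caching rate K(1 - M/N)/(1 + KM/N), used here only as a
    reference point for the multiplicative gain t = KM/N mentioned in the abstract."""
    return K * (1 - M / N) / (1 + K * M / N)

K, M, N = 20, 25, 100
t = K * M / N
print(f"gain t = {t}, uncoded {uncoded_rate(K, M, N):.2f}, "
      f"coded {coded_rate_centralized(K, M, N):.2f}")
# gain t = 5.0, uncoded 15.00, coded 2.50
```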
21. The effect of decision delay in finite-length decision feedback equalization
- Author
-
Voois, Paul A., Lee, Inkyu, and Cioffi, John M.
- Subjects
Equalizers (Electronics) -- Design and construction ,Feedback control systems -- Design and construction ,Digital filters -- Design and construction - Published
- 1996
22. A Scaling Law to Predict the Finite-Length Performance of Spatially-Coupled LDPC Codes
- Author
-
Pablo M. Olmos and Rudiger Urbanke
- Subjects
FOS: Computer and information sciences ,Scaling law ,Asymptotic analysis ,Computer science ,Information Theory (cs.IT) ,Computer Science - Information Theory ,Library and Information Sciences ,Binary erasure channel ,Computer Science Applications ,Transmission (telecommunications) ,Probability of error ,Line (geometry) ,Range (statistics) ,Statistical physics ,Low-density parity-check code ,Information Systems - Abstract
Spatially-coupled LDPC codes are known to have excellent asymptotic properties. Much less is known regarding their finite-length performance. We propose a scaling law to predict the error probability of finite-length spatially-coupled ensembles when transmission takes place over the binary erasure channel. We discuss how the parameters of the scaling law are connected to fundamental quantities appearing in the asymptotic analysis of these ensembles and we verify that the predictions of the scaling law fit well to the data derived from simulations over a wide range of parameters. The ultimate goal of this line of research is to develop analytic tools for the design of spatially-coupled LDPC codes under practical constraints.
- Published
- 2015
23. On the Finite Length Scaling of $q$ -Ary Polar Codes
- Author
-
Goldin, Dina, primary and Burshtein, David, additional
- Published
- 2018
- Full Text
- View/download PDF
24. Finite-Length Analysis of Caching-Aided Coded Multicasting.
- Author
-
Shanmugam, Karthikeyan, Ji, Mingyue, Tulino, Antonia M., Llorca, Jaime, and Dimakis, Alexandros G.
- Subjects
- *
MULTICASTING (Computer networks) , *MULTICASTING protocols , *FINITE difference method , *FINITE differences , *GEOMETRICAL drawing - Abstract
We study a noiseless broadcast link serving $K$ users whose requests arise from a library of $N$ files. Every user is equipped with a cache of size $M$ files each. It has been shown that by splitting all the files into packets and placing individual packets in a random independent manner across all the caches prior to any transmission, at most $N/M$ file transmissions are required for any set of demands from the library. The achievable delivery scheme involves linearly combining packets of different files following a greedy clique cover solution to the underlying index coding problem. This remarkable multiplicative gain of random placement and coded delivery has been established in the asymptotic regime when the number of packets per file $F$ scales to infinity. The asymptotic coding gain obtained is roughly $t=KM/N$. In this paper, we initiate the finite-length analysis of random caching schemes when the number of packets $F$ is a function of the system parameters $M$, $N$, and $K$. Specifically, we show that the existing random placement and clique cover delivery schemes that achieve optimality in the asymptotic regime can have at most a multiplicative gain of 2 even if the number of packets is exponential in the asymptotic gain $t=K(M/N)$. Furthermore, for any clique cover-based coded delivery and a large class of random placement schemes that include the existing ones, we show that the number of packets required to get a multiplicative gain of $(4/3)g$ is at least $O((g/K)(N/M)^{g-1})$. We design a new random placement and an efficient clique cover-based delivery scheme that achieves this lower bound approximately. We also provide tight concentration results that show that the average (over the random placement involved) number of transmissions concentrates very well requiring only a polynomial number of packets in the rest of the system parameters. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
25. A Scaling Law to Predict the Finite-Length Performance of Spatially-Coupled LDPC Codes.
- Author
-
Olmos, Pablo M. and Urbanke, Rudiger L.
- Subjects
- *
LOW density parity check codes , *SCALING laws (Statistical physics) , *ITERATIVE decoding , *ERROR probability , *DISTRIBUTION (Probability theory) - Abstract
Spatially-coupled low-density parity-check (SC-LDPC) codes are known to have excellent asymptotic properties. Much less is known regarding their finite-length performance. We propose a scaling law to predict the error probability of finite-length spatially coupled code ensembles when transmission takes place over the binary erasure channel. We discuss how the parameters of the scaling law are connected to fundamental quantities appearing in the asymptotic analysis of these ensembles and we verify that the predictions of the scaling law fit well to the data derived from simulations over a wide range of parameters. The ultimate goal of this line of research is to develop analytic tools for the design of SC-LDPC codes under practical constraints. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
26. Finite-Length Linear Schemes for Joint Source-Channel Coding over Gaussian Broadcast Channels with Feedback
- Author
-
Murin, Yonathan, primary, Kaspi, Yonatan, additional, Dabora, Ron, additional, and Gunduz, Deniz, additional
- Published
- 2017
- Full Text
- View/download PDF
27. The Universal LZ77 Compression Algorithm Is Essentially Optimal for Individual Finite-Length $N$-Blocks
- Author
-
Jacob Ziv
- Subjects
Lossless compression ,Adaptive algorithm ,Binary number ,Data_CODINGANDINFORMATIONTHEORY ,Library and Information Sciences ,Pseudorandom binary sequence ,Computer Science Applications ,Asymptotically optimal algorithm ,Algorithm design ,Encoder ,Algorithm ,Information Systems ,Mathematics ,Data compression - Abstract
Consider the case where consecutive blocks of N letters of a semi-infinite individual sequence X over a finite alphabet are being compressed into binary sequences by some one-to-one mapping. No a priori information about X is available at the encoder, which must therefore adopt a universal data-compression algorithm. It is known that there exist a number of asymptotically optimal universal data compression algorithms (e.g., the Lempel-Ziv (LZ) algorithm, context tree algorithm and an adaptive Huffman algorithm) such that when successively applied to N-blocks then, the best error-free compression for the particular individual sequence X is achieved as N tends to infinity. The best possible compression that may be achieved by any universal data compression algorithm for finite N-blocks is discussed. Essential optimality for the compression of finite-length sequences is defined. It is shown that the LZ77 universal compression of N-blocks is essentially optimal for finite N-blocks. Previously, it has been demonstrated that a universal context tree compression of N blocks is essentially optimal as well.
- Published
- 2009
28. A Finite-Length Algorithm for LDPC Codes Without Repeated Edges on the Binary Erasure Channel
- Author
-
Sarah J. Johnson
- Subjects
Block code ,Computer science ,Error floor ,Concatenated error correction code ,Data_CODINGANDINFORMATIONTHEORY ,Library and Information Sciences ,Binary erasure channel ,Luby transform code ,Upper and lower bounds ,Online codes ,Computer Science Applications ,Binary code ,Tornado code ,Low-density parity-check code ,Erasure code ,Error detection and correction ,Tanner graph ,Algorithm ,Computer Science::Information Theory ,Information Systems - Abstract
This paper considers the performance, on the binary erasure channel, of low-density parity-check (LDPC) codes without repeated edges in their Tanner graphs. A modification to existing finite-length analysis algorithms is presented for these codes.
- Published
- 2009
29. Finite-Length Scaling for Polar Codes.
- Author
-
Hassani, Seyed Hamed, Alishahi, Kasra, and Urbanke, Rudiger L.
- Subjects
- *
CODING theory , *BIT rate , *DATA transmission systems , *PROBABILITY theory , *RANDOM codes (Coding theory) - Abstract
Consider a binary-input memoryless output-symmetric channel $W$. Such a channel has a capacity, call it $I(W)$, and for any rate $R < I(W)$ and target error probability $P_{\rm e} > 0$ reliable transmission is possible at sufficiently large block-length. For polar codes, we show that if the sum of the Bhattacharyya parameters of the selected channels is required to be at most $P_{\rm e} > 0$, then the required block-length $N$ scales in terms of the rate $R < I(W)$ as $N \geq \alpha/(I(W)-R)^{\underline{\mu}}$, where $\alpha$ is a positive constant that depends on $P_{\rm e}$ and $I(W)$. We show that $\underline{\mu} = 3.579$ is a valid choice, and we conjecture that indeed the value of $\underline{\mu}$ can be improved to $\underline{\mu}=3.627$, the parameter for the binary erasure channel. Also, we show that with the same requirement on the sum of Bhattacharyya parameters, the block-length scales in terms of the rate like $N \leq \beta/(I(W)-R)^{\overline{\mu}}$, where $\beta$ is a constant that depends on $P_{\rm e}$ and $I(W)$, and $\overline{\mu}=6$. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
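For the BEC the polarization recursion behind this entry is explicit: a channel with erasure rate z splits into channels with erasure rates 2z - z^2 and z^2. The sketch below tracks all N = 2^n synthesized erasure rates and counts the fraction below a target N^{-gamma}, which is the kind of quantity whose convergence to 1 - eps the scaling analysis quantifies (parameter choices are illustrative).

```python
def polarize_bec(eps, n):
    """Erasure probabilities of the N = 2^n synthesized channels obtained by
    polarizing BEC(eps) n times: each rate z splits into 2z - z^2 and z^2."""
    levels = [eps]
    for _ in range(n):
        nxt = []
        for z in levels:
            nxt.extend((2 * z - z * z, z * z))
        levels = nxt
    return levels

eps, n, gamma = 0.5, 14, 0.5
N = 2 ** n
rates = polarize_bec(eps, n)
good = sum(1 for z in rates if z <= N ** (-gamma))
# fraction of synthesized channels already 'good' at blocklength N; the scaling
# question is how fast this fraction approaches 1 - eps as N grows
print(N, good / N, 1 - eps)
```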
30. An Improved Sphere-Packing Bound for Finite-Length Codes Over Symmetric Memoryless Channels
- Author
-
Igal Sason and Gil Wiechman
- Subjects
Discrete mathematics ,Block code ,Hamming bound ,List decoding ,Data_CODINGANDINFORMATIONTHEORY ,Library and Information Sciences ,Upper and lower bounds ,Computer Science Applications ,Combinatorics ,Channel capacity ,Transmission (telecommunications) ,Chapman–Robbins bound ,Decoding error probability ,Error detection and correction ,Decoding methods ,Phase-shift keying ,Computer Science::Information Theory ,Information Systems ,Mathematics - Abstract
This paper derives an improved sphere-packing (ISP) bound for finite-length error-correcting codes whose transmission takes place over symmetric memoryless channels, and the codes are decoded with an arbitrary list decoder. We first review classical results, i.e., the 1959 sphere-packing (SP59) bound of Shannon for the Gaussian channel, and the 1967 sphere-packing (SP67) bound of Shannon et al. for discrete memoryless channels. An improvement on the SP67 bound, as suggested by Valembois and Fossorier, is also discussed. These concepts are used for the derivation of a new lower bound on the error probability of list decoding (referred to as the ISP bound) which is uniformly tighter than the SP67 bound and its improved version. The ISP bound is applicable to symmetric memoryless channels, and some of its applications are presented. Its tightness under maximum-likelihood (ML) decoding is studied by comparing the ISP bound to previously reported upper and lower bounds on the ML decoding error probability, and also to computer simulations of iteratively decoded turbo-like codes. This paper also presents a technique which performs the entire calculation of the SP59 bound in the logarithmic domain, thus facilitating the exact calculation of this bound for moderate to large block lengths without the need for the asymptotic approximations provided by Shannon.
- Published
- 2008
31. On the Nonlinear Complexity and Lempel–Ziv Complexity of Finite Length Sequences
- Author
-
Nicholas Kalouptsidis, Konstantinos Limniotis, and Nicholas Kolokotronis
- Subjects
Discrete mathematics ,Average-case complexity ,Parameterized complexity ,Library and Information Sciences ,Computer Science::Other ,Computer Science Applications ,PH ,Complexity index ,Structural complexity theory ,Asymptotic computational complexity ,Circuit complexity ,Computer Science::Data Structures and Algorithms ,Algorithm ,Time complexity ,Computer Science::Information Theory ,Information Systems ,Mathematics - Abstract
The nonlinear complexity of binary sequences and its connections with Lempel-Ziv complexity is studied in this paper. A new recursive algorithm is presented, which produces the minimal nonlinear feedback shift register of a given binary sequence. Moreover, it is shown that the eigenvalue profile of a sequence uniquely determines its nonlinear complexity profile, thus establishing a connection between Lempel-Ziv complexity and nonlinear complexity. Furthermore, a lower bound for the Lempel-Ziv compression ratio of a given sequence is proved that depends on its nonlinear complexity.
- Published
- 2007
32. Finite-length analysis of low-density parity-check codes on the binary erasure channel
- Author
-
Rudiger Urbanke, Changyan Di, I.E. Telatar, D. Proietti, and Thomas Richardson
- Subjects
Block code ,Discrete mathematics ,Data_CODINGANDINFORMATIONTHEORY ,Library and Information Sciences ,Binary erasure channel ,Online codes ,Computer Science Applications ,Erasure ,Tornado code ,Low-density parity-check code ,Erasure code ,Decoding methods ,Computer Science::Information Theory ,Information Systems ,Mathematics - Abstract
In this paper, we are concerned with the finite-length analysis of low-density parity-check (LDPC) codes when used over the binary erasure channel (BEC). The main result is an expression for the exact average bit and block erasure probability for a given regular ensemble of LDPC codes when decoded iteratively. We also give expressions for upper bounds on the average bit and block erasure probability for regular LDPC ensembles and the standard random ensemble under maximum-likelihood (ML) decoding. Finally, we present what we consider to be the most important open problems in this area.
- Published
- 2002
33. MMSE decision-feedback equalizers: finite-length results
- Author
-
Naofal Al-Dhahir and John M. Cioffi
- Subjects
Approximation theory ,Mathematical optimization ,Finite impulse response ,Equalization (audio) ,Library and Information Sciences ,Computer Science Applications ,Adaptive filter ,Intersymbol interference ,Convergence (routing) ,Algorithm ,Quadrature amplitude modulation ,Computer Science::Information Theory ,Information Systems ,Mathematics ,Cholesky decomposition - Abstract
This paper extends a number of results on the infinite-length minimum-mean-square-error decision feedback equalizer (MMSE-DFE) reported by Cioffi, Dudevoir, Eyuboglu and Forney (see IEEE Trans. Commun., 1995) to the finite-length case. Cholesky factorization and displacement structure theory are demonstrated to be two powerful analytical tools for analyzing the finite-length MMSE-DFE. Our objective throughout the paper is to establish finite-length analogs of the well-known infinite-length MMSE-DFE results. Similarities and differences between the two cases are examined and delineated. Finally, convergence of our derived finite-length results to their well-established infinite-length counterparts is shown.
- Published
- 1995
34. The effect of decision delay in finite-length decision feedback equalization
- Author
-
Inkyu Lee, P.A. Voois, and John M. Cioffi
- Subjects
Computer Science::Systems and Control ,Control theory ,Computer science ,Computation ,Equalization (audio) ,Equalizer ,Filter (signal processing) ,Library and Information Sciences ,Algebraic number ,Algebra over a field ,Computer Science::Information Theory ,Computer Science Applications ,Information Systems - Abstract
In this correspondence we derive the finite-length, minimum mean-squared error decision feedback equalizer (MMSE-DFE). We include decision delay as an explicit parameter. Our derivation yields an algebraic interpretation of the effect of decision delay on DFE performance (measured by mean-squared error). It also allows the fast computation of the MMSE-DFE for several different values of both decision delay and the number of feedback taps. Our approach is especially useful for short filter lengths, when the decision delay can significantly affect DFE performance.
- Published
- 1996
35. Finite-Length Analysis of Low-Density Parity-Check Codes on the Binary Erasure Channel.
- Author
-
Di, Changyan, Proietti, David, Telatar, I. Emre, Richardson, Thomas J., and Urbanke, Rudiger L.
- Subjects
- *
DECODERS (Electronics) , *FINITE element method , *ITERATIVE methods (Mathematics) - Abstract
Presents a study that analyzed the finite-length behavior of low-density parity-check (LDPC) codes when used over the binary erasure channel. Background on LDPC codes under belief propagation decoding; Comparison of LDPC ensembles between iterative decoding and maximum-likelihood decoding; Information on the characterization of the average bit and block erasure probabilities for a given regular ensemble of LDPC codes.
- Published
- 2002
- Full Text
- View/download PDF
36. Improved Bounds on the Finite Length Scaling of Polar Codes.
- Author
-
Goldin, Dina and Burshtein, David
- Subjects
- *
MATHEMATICAL bounds , *SOURCE code , *CODING theory , *COMMUNICATION , *INFORMATION theory - Abstract
Improved upper bounds on the blocklength required to communicate over binary-input channels using polar codes, below some given error probability, are derived. For that purpose, an improved bound on the number of non-polarizing channels is obtained. The main result is that the blocklength required to communicate reliably scales at most as \(O((I(W)-R)^{-5.702})\) , where \(R\) is the code rate and \(I(W)\) is the symmetric capacity of the channel \(W\) . The results are then extended to polar lossy source coding at rate \(R\) of a source with symmetric distortion-rate function \(D(\cdot )\) . The blocklength required scales at most as \(O((D^{0})^{-5.702})\) , where \(D^{0}\) is the maximal allowed gap between the actual average (or typical) distortion and \(D(R)\) . [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
37. The Universal LZ77 Compression Algorithm Is Essentially Optimal for Individual Finite-Length $N$-Blocks
- Author
-
Ziv, Jacob, primary
- Published
- 2009
- Full Text
- View/download PDF
38. Spatially Coupled Generalized LDPC Codes: Asymptotic Analysis and Finite Length Scaling
- Author
-
David G. M. Mitchell, Pablo M. Olmos, Michael Lentmaier, Daniel J. Costello, European Commission, and Ministerio de Ciencia, Innovación y Universidades (España)
- Subjects
Block code ,FOS: Computer and information sciences ,Asymptotic analysis ,Spatially coupled codes ,Computer Science - Information Theory ,Finite length scaling ,02 engineering and technology ,Library and Information Sciences ,Electronic mail ,0202 electrical engineering, electronic engineering, information engineering ,Code (cryptography) ,Low-density parity-check code ,Minimum distance ,Generalized LDPC codes ,Mathematics ,Computer Science::Information Theory ,Telecommunications ,Information Theory (cs.IT) ,020206 networking & telecommunications ,Iterative decoding thresholds ,Binary erasure channel ,Computer Science Applications ,Convolutional code ,Algorithm ,Decoding methods ,Information Systems
Generalized low-density parity-check (GLDPC) codes are a class of LDPC codes in which the standard single parity check (SPC) constraints are replaced by constraints defined by a linear block code. These stronger constraints typically result in improved error floor performance, due to better minimum distance and trapping set properties, at a cost of some increased decoding complexity. In this paper, we study spatially coupled generalized low-density parity-check (SC-GLDPC) codes and present a comprehensive analysis of these codes, including: (1) an iterative decoding threshold analysis of SC-GLDPC code ensembles demonstrating capacity approaching thresholds via the threshold saturation effect; (2) an asymptotic analysis of the minimum distance and free distance properties of SC-GLDPC code ensembles, demonstrating that the ensembles are asymptotically good; and (3) an analysis of the finite-length scaling behavior of both GLDPC block codes and SC-GLDPC codes based on a peeling decoder (PD) operating on a binary erasure channel (BEC). Results are compared to GLDPC block codes, and the advantages and disadvantages of SC-GLDPC codes are discussed., Comment: Revised version submitted to the IEEE Transactions on Information Theory
- Full Text
- View/download PDF
39. Minimum weight convolutional codewords of finite length (Corresp.)
- Author
-
C. Weber and G. Huth
- Subjects
Discrete mathematics ,Minimum distance ,Minimum weight ,Data_CODINGANDINFORMATIONTHEORY ,Library and Information Sciences ,Body weight ,Linear code ,Upper and lower bounds ,Computer Science Applications ,Binary convolutional codes ,Combinatorics ,Convolutional code ,Distance parameter ,Computer Science::Information Theory ,Information Systems ,Mathematics - Abstract
For convolutional codes, the variation of the minimum distance between nonmerging codewords with the lengths of those codewords is considered for all finite lengths. This is carried out in terms of a new distance parameter for convolutional codes, $d_{o}$, the minimum average weight per branch over all cycles. Upper and lower bounds on $d_{o}$ for binary convolutional codes of rate $1/n$ are presented. The tradeoff between $d_{o}$ and the free distance $d_{\rm free}$ is obtained for small memory length codes.
- Published
- 1976
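As a companion to the distance parameters discussed in this entry, the free distance of a small convolutional code can be found by a shortest-path search on its state diagram. The sketch below uses the textbook rate-1/2, memory-2 code with octal generators (7,5) purely as an example; it is not a code from the paper.

```python
import heapq

# Rate-1/2, memory-2 convolutional code with generators (7,5) in octal:
# g0 = 1 + D + D^2, g1 = 1 + D^2 (a standard textbook example).
G = [(1, 1, 1), (1, 0, 1)]

def step(state, bit):
    """Return (next_state, branch_output_weight) for one input bit."""
    regs = (bit,) + state                       # (current input, s1, s2)
    weight = 0
    for g in G:
        out = sum(r & t for r, t in zip(regs, g)) % 2
        weight += out
    return regs[:2], weight

def free_distance():
    """Dijkstra over paths that leave the zero state and re-merge with it."""
    start = (0, 0)
    s, w = step(start, 1)                       # first branch must carry a '1'
    dist = {s: w}
    pq = [(w, s)]
    while pq:
        d, state = heapq.heappop(pq)
        if state == start:
            return d                            # re-merged with the all-zero path
        if d > dist.get(state, float("inf")):
            continue
        for bit in (0, 1):
            ns, bw = step(state, bit)
            nd = d + bw
            if nd < dist.get(ns, float("inf")):
                dist[ns] = nd
                heapq.heappush(pq, (nd, ns))
    return None

print(free_distance())   # 5 for the (7,5) code
```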
40. On upper bounds for error detecting and error correcting codes of finite length
- Author
-
N. Wax
- Subjects
Combinatorics ,Unit cube ,Concatenated error correction code ,Forward error correction ,Library and Information Sciences ,Low-density parity-check code ,Linear code ,Upper and lower bounds ,Hamming code ,Coding gain ,Computer Science Applications ,Information Systems ,Mathematics - Abstract
Upper bounds for error detecting and error correcting codes are obtained in this paper. One upper bound is found by exploiting the geometrical model of coding introduced by Hamming. The volume of an appropriate geometrical body is compared with the volume of the unit cube, in getting the first upper bound. An improvement on this upper bound can be found by introducing a mass density function, and comparing the mass of the body with the mass of the unit cube. A comparison is made with known upper bounds, and with best codes found thus far. The improved upper bound given here is frequently somewhat smaller than previously known upper bounds.
- Published
- 1959
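The "geometrical model of coding introduced by Hamming" mentioned in this entry is the sphere-packing picture; the classical Hamming bound that falls out of it (not the improved bound of the paper) is easy to evaluate:

```python
from math import comb

def hamming_bound(n, d):
    """Sphere-packing (Hamming) upper bound on the size of a binary code of
    length n and minimum distance d: the volume of radius-t Hamming balls,
    t = (d - 1) // 2, is compared with the 2^n points of the cube."""
    t = (d - 1) // 2
    ball = sum(comb(n, i) for i in range(t + 1))
    return 2 ** n // ball

print(hamming_bound(7, 3))    # 16, met with equality by the (7,4) Hamming code
print(hamming_bound(23, 7))   # 4096, met with equality by the binary Golay code
```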
41. Minimum weight convolutional codewords of finite length (Corresp.)
- Author
-
Huth, G., primary and Weber, C., additional
- Published
- 1976
- Full Text
- View/download PDF
42. On upper bounds for error detecting and error correcting codes of finite length
- Author
-
Wax, N., primary
- Published
- 1959
- Full Text
- View/download PDF
43. LDPC Codes Over the $q$ -ary Multi-Bit Channel.
- Author
-
Cohen, Rami, Raviv, Netanel, and Cassuto, Yuval
- Subjects
LOW density parity check codes ,ERROR rates ,MAGNITUDE (Mathematics) ,COMPUTER storage devices ,TWO-dimensional bar codes ,DECODING algorithms ,ITERATIVE decoding - Abstract
In this paper, we introduce a new channel model termed as the $q$ -ary multi-bit channel. This channel models a memory device, where $q$ -ary symbols ($q=2^{s}$) are stored in the form of current/voltage levels. The symbols are read in a measurement process, which provides a symbol bit in each measurement step, starting from the most significant bit. An error event occurs when not all the symbol bits are known. To deal with such error events, we use GF($q$) low-density parity-check (LDPC) codes and analyze their decoding performance. We start with iterative-decoding threshold analysis and derive optimal edge-label distributions for maximizing the decoding threshold. We later move to a finite-length iterative-decoding analysis and propose an edge-labeling algorithm for the improved decoding performance. We then provide a finite-length maximum-likelihood decoding analysis for both the standard non-binary random ensemble and LDPC ensembles. Finally, we demonstrate by simulations that the proposed edge-labeling algorithm improves the finite-length decoding performance by orders of magnitude. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
44. Further Investigations on Nonlinear Complexity of Periodic Binary Sequences
- Author
-
Yuan, Qin, Li, Chunlei, Zeng, Xiangyong, Helleseth, Tor, and He, Debiao
- Abstract
Nonlinear complexity is an important measure for assessing the randomness of sequences. In this paper we investigate how circular shifts affect the nonlinear complexities of finite-length binary sequences and then reveal a more explicit relation between nonlinear complexities of finite-length binary sequences and their corresponding periodic sequences. Based on the relation, we propose two algorithms that can generate all periodic binary sequences with any prescribed nonlinear complexity.
- Published
- 2024
- Full Text
- View/download PDF
45. Error-Correction Performance of Regular Ring-Linear LDPC Codes Over Lee Channels
- Author
-
Bariffi, Jessica, Bartz, Hannes, Liva, Gianluigi, and Rosenthal, Joachim
- Abstract
Most low-density parity-check (LDPC) code constructions are considered over finite fields. In this work, we focus on regular LDPC codes over integer residue rings and analyze their performance with respect to the Lee metric. Their error-correction performance is studied over two channel models, in the Lee metric. The first channel model is a discrete memoryless channel, whereas in the second channel model an error vector is drawn uniformly at random from all vectors of a fixed Lee weight. It is known that the two channel laws coincide in the asymptotic regime, meaning that their marginal distributions match. For both channel models, we derive upper bounds on the block error probability in terms of a random coding union bound as well as sphere packing bounds that make use of the marginal distribution of the considered channels. We estimate the decoding error probability of regular LDPC code ensembles over the channels using the marginal distribution and determining the expected Lee weight distribution of a random LDPC code over a finite integer ring. By means of density evolution and finite-length simulations, we estimate the error-correction performance of selected LDPC code ensembles under belief propagation decoding and a low-complexity symbol message passing decoding algorithm and compare the performances. The analysis developed in this paper may serve to design regular low-density parity-check (LDPC) codes over integer residue rings for storage and cryptographic application.
- Published
- 2024
- Full Text
- View/download PDF
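For concreteness, the Lee metric referred to in the entry above assigns each symbol $x$ of the ring $\mathbb{Z}_q$ the weight $\min(x, q-x)$, and the Lee distance between two vectors is the sum of the symbol weights of their difference. A minimal sketch (function names are mine):

```python
def lee_weight(x: int, q: int) -> int:
    """Lee weight of a symbol x in Z_q: the shorter way around the ring."""
    x %= q
    return min(x, q - x)

def lee_distance(u, v, q):
    """Lee distance between two vectors over Z_q (sum of symbol weights)."""
    return sum(lee_weight(a - b, q) for a, b in zip(u, v))

# Example over Z_8: the symbol 6 has Lee weight 2, and the vectors below
# are at Lee distance 2 + 3 + 0 = 5.
print(lee_weight(6, 8))                       # -> 2
print(lee_distance([1, 5, 7], [7, 0, 7], 8))  # -> 5
```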
46. On Codes Achieving Zero Error Capacities in Limited Magnitude Error Channels.
- Author
-
Bose, Bella, Elarief, Noha, and Tallini, Luca G.
- Subjects
ERROR probability , CODING theory , DECODING algorithms , ERROR analysis in mathematics , MATHEMATICAL bounds - Abstract
Shannon, in his seminal 1956 paper, introduced the concept of the zero error capacity, $C_{0}$, of a noisy channel. This is defined as the least upper bound of rates at which it is possible to transmit information with zero probability of error. At present, not many codes are known to achieve the zero error capacity. In this paper, some codes which achieve zero error capacities in limited magnitude error channels are described. The code lengths of these zero error capacity achieving codes can be of any finite length $n=1,2,\ldots$ , in contrast to the long lengths required for the known regular capacity achieving codes, such as turbo codes, LDPC codes, and polar codes. Both wrap-around and non-wrap-around limited magnitude error models are considered in this paper. For the non-wrap-around error model, the exact value of the zero error capacity is derived, and optimal non-systematic and systematic codes are designed. The non-systematic codes achieve the zero error capacity with any finite length. The optimal systematic codes achieve the systematic zero error capacity of the channel, which is defined as the zero error capacity with the additional requirement that the communication must be carried out with a systematic code. It is also shown that the rates of the proposed systematic codes are equal to or approximately equal to the zero error capacity of the channel. For the wrap-around model, bounds are derived on the zero error capacity, and in many cases the bounds give the exact value. In addition, optimal wrap-around non-systematic and systematic codes are developed which either achieve or are close to achieving the zero error capacity with finite length. [ABSTRACT FROM PUBLISHER]
- Published
- 2018
- Full Text
- View/download PDF
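One textbook-style way to see how zero-error communication is possible over a limited-magnitude channel, in the spirit of (but not identical to) the optimal codes of the entry above, is to space codeword levels $2\ell+1$ apart and decode by rounding. The sketch below assumes a non-wrap-around model with symmetric errors of magnitude at most $\ell$; the names and parameters are illustrative.

```python
def encode(u: int, ell: int) -> int:
    """Map an information symbol u to a channel level spaced 2*ell + 1 apart.
    A symmetric error of magnitude at most ell can then never move one
    codeword level closer to another codeword level than to its own.
    """
    return u * (2 * ell + 1)

def decode(r: int, ell: int) -> int:
    """Round the received level to the nearest multiple of 2*ell + 1."""
    step = 2 * ell + 1
    return (r + ell) // step

# Demo: all magnitude-1 errors on every level are corrected with zero error.
ell = 1
for u in range(5):
    for e in (-1, 0, 1):
        assert decode(encode(u, ell) + e, ell) == u
print("all limited-magnitude errors corrected")
```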
47. Maximum-Order Complexity and 2-Adic Complexity
- Author
-
Chen, Zhiru, Chen, Zhixiong, Obrovsky, Jakob, and Winterhof, Arne
- Abstract
The 2-adic complexity has been well-analyzed in the periodic case. However, we are not aware of any theoretical results in the aperiodic case. In particular, the $N$th 2-adic complexity has not been studied for any promising candidate of a pseudorandom sequence of finite length $N$. Also, nothing seems to be known for a part of the period of length $N$ of any cryptographically interesting periodic sequence. Here we introduce the first method for this aperiodic case. More precisely, we study the relation between the $N$th maximum-order complexity and the $N$th 2-adic complexity of binary sequences and prove a lower bound on the $N$th 2-adic complexity in terms of the $N$th maximum-order complexity. Then any known lower bound on the $N$th maximum-order complexity implies a lower bound on the $N$th 2-adic complexity of the same order of magnitude. In the periodic case, one can prove a slightly better result. The latter bound is sharp, which is illustrated by the maximum-order complexity of $\ell$ …
- Published
- 2024
- Full Text
- View/download PDF
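For very short sequences, the $N$th 2-adic complexity discussed above can be evaluated by brute force. The sketch below uses one common convention, $\log_2 \min \max(|f|,|g|)$ over rational approximations $f/g$ of $\sum_i s_i 2^i$ modulo $2^N$ with $g$ odd; other papers use slightly different normalizations, so treat this only as an illustration of the quantity, not as the authors' definition.

```python
from math import log2

def two_adic_complexity(seq):
    """Brute-force N-th 2-adic complexity of a finite binary sequence,
    under one common convention: with S = sum_i s_i * 2**i, it equals
    log2( min over odd g > 0 of max(|f|, g) ), where f is the absolutely
    smallest integer with f == g*S (mod 2**N).  Exponential in N, so this
    is only an illustrative sketch for short sequences.
    """
    N = len(seq)
    M = 1 << N
    S = sum(bit << i for i, bit in enumerate(seq))
    best = None
    for g in range(1, (M >> 1) + 1, 2):       # odd denominators g > 0 suffice
        f = (g * S) % M
        if f > M >> 1:                        # centred residue in (-M/2, M/2]
            f -= M
        cand = max(abs(f), g)
        best = cand if best is None else min(best, cand)
    return log2(best)

print(two_adic_complexity([1, 0, 1, 1, 0, 1, 0, 1]))   # ~2.81 (best f/g is 7/3)
```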
48. Rateless Codes With Unequal Error Protection Property.
- Author
-
Rahnavard, Nazanin, Vellambi, Badri N., and Fekri, Faramarz
- Subjects
- *
CODING theory , *DECODERS & decoding , *ASYMPTOTIC expansions , *ERROR-correcting codes , *PROBABILITY theory , *MAXIMUM likelihood statistics - Abstract
In this correspondence, a generalization of rateless codes is proposed. The proposed codes provide unequal error protection (UEP). The asymptotic properties of these codes under iterative decoding are investigated. Moreover, upper and lower bounds on the maximum-likelihood (ML) decoding error probabilities of finite-length LT and Raptor codes for both equal and unequal error protection schemes are derived. Further, our work is verified with simulations. Simulation results indicate that the proposed codes provide desirable UEP. We also note that the UEP property does not impose a considerable drawback on the overall performance of the codes. Moreover, we discuss how the proposed codes can provide unequal recovery time (URT). This means that, given a target bit error rate, different parts of the information bits can be decoded after receiving different amounts of encoded bits. This implies that the information bits can be recovered in a progressive manner. This URT property may be used for sequential data recovery in video/audio streaming. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
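A simple way to picture the unequal error protection idea in the entry above is an LT-style encoder that draws the neighbours of each output symbol from the "more important" part of the data with elevated probability. The sketch below is only illustrative: the degree chooser is a placeholder rather than the robust soliton distribution, and the weighting scheme is an assumption rather than the paper's exact construction.

```python
import random

def toy_degree(k):
    # Placeholder degree chooser (NOT the robust soliton distribution),
    # kept simple so the sketch stays self-contained.
    return min(random.choice([1, 2, 2, 3, 3, 4, 5, 8]), k)

def uep_lt_encode(data_bits, n_important, weight, n_output):
    """Generate LT-style output symbols whose neighbours are drawn from the
    'more important' prefix of the data with probability `weight`
    (0 < weight < 1) and from the remaining bits otherwise -- one simple way
    to realise unequal error protection.
    """
    k = len(data_bits)
    outputs = []
    for _ in range(n_output):
        d = toy_degree(k)
        neighbours = set()
        while len(neighbours) < d:
            if random.random() < weight:
                neighbours.add(random.randrange(n_important))
            else:
                neighbours.add(random.randrange(n_important, k))
        value = 0
        for i in neighbours:
            value ^= data_bits[i]              # output symbol = XOR of chosen bits
        outputs.append((frozenset(neighbours), value))
    return outputs

random.seed(0)
packets = uep_lt_encode([1, 0, 1, 1, 0, 0, 1, 0, 1, 1],
                        n_important=3, weight=0.5, n_output=20)
print(len(packets), "output symbols generated")
```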
49. Blind OFDM Channel Estimation Using FIR Constraints: Reduced Complexity and Identifiability.
- Author
-
Song, Seongwook and Singer, Andrew C.
- Subjects
- *
CONSTRAINED optimization , *ALGORITHMS , *ALPHABET , *MULTIPLEXING , *DATA transmission systems , *SIGNAL-to-noise ratio , *MATHEMATICAL optimization , *SIMULATION methods & models , *SYSTEM analysis - Abstract
In this correspondence, blind channel estimators exploiting finite alphabet constraints are discussed for orthogonal frequency-division multiplexing (OFDM) systems. Considering the channel and data jointly, a joint maximum-likelihood (JML) algorithm is described, along with identifiability conditions in the noise-free case. This approach enables development of general identifiability conditions for the minimum-distance (MD) finite alphabet blind algorithm of Zhou and Giannakis. Both the JML and MD algorithms suffer from high numerical complexity, as they rely on exhaustive search methods to resolve a large number of ambiguities. We present a substantially more efficient blind algorithm, the reduced complexity minimum distance (RMD) algorithm, by exploiting properties of the assumed finite-length impulse response (FIR) channel. The RMD algorithm exploits constraints on the unwrapped phase of FIR systems and results in significant reductions in numerical complexity over existing methods. In many cases, the RMD approach is able to completely eliminate the exhaustive search of the JML and MD approaches, while providing channel estimates of the same quality. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
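The finite-alphabet constraint exploited in the entry above can be illustrated in a few lines for BPSK: since every subcarrier symbol satisfies $X_k^2 = 1$, averaging $Y_k^2$ across OFDM symbols reveals $H_k^2$, i.e., the channel up to a per-subcarrier sign. Resolving that residual ambiguity efficiently (via the FIR constraint) is what the JML/MD/RMD algorithms address; the noise-free sketch below stops at the ambiguity, and its parameter values are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
K, n_sym, L = 64, 200, 4                       # subcarriers, OFDM symbols, channel taps

h = rng.standard_normal(L) + 1j * rng.standard_normal(L)   # FIR channel (L taps)
H = np.fft.fft(h, K)                           # frequency response on K subcarriers
X = rng.choice([-1.0, 1.0], size=(n_sym, K))   # BPSK data: finite alphabet {-1, +1}
Y = X * H                                      # received subcarriers, noise-free

# Finite-alphabet trick: X**2 == 1, so Y**2 == H**2 on every subcarrier, and
# averaging over symbols estimates the channel up to a per-subcarrier sign.
H2_est = (Y ** 2).mean(axis=0)
print(np.allclose(H2_est, H ** 2))             # True: channel known up to +-1 ambiguity
```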
50. Optimum Overflow Thresholds in Variable-Length Source Coding Allowing Non-Vanishing Error Probability.
- Author
-
Nomura, Ryo and Yagi, Hideki
- Subjects
SOURCE code , CODING theory , ERROR probability , CHANNEL coding , BOOLEAN functions , PROBABILITY theory - Abstract
The variable-length source coding problem allowing an error probability up to some constant is considered for general sources. In this problem, the optimum mean codeword length of variable-length codes has already been determined. In this paper, on the other hand, we focus on the overflow (or excess codeword length) probability instead of the mean codeword length. The infimum of overflow thresholds under the constraint that both the error probability and the overflow probability are smaller than or equal to some constant is called the optimum overflow threshold. We first derive finite-length upper and lower bounds on these probabilities so as to analyze the optimum overflow thresholds. Then, by using these bounds, we determine the general formula for the optimum overflow thresholds in both first-order and second-order forms. Next, we consider another expression of the derived general formula so as to reveal its relationship with the optimum coding rate in the fixed-length source coding problem. Finally, we apply the general formula derived in this paper to the case of stationary memoryless sources. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
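A concrete instance of the overflow event studied above: for an i.i.d. Bernoulli($p$) source encoded with idealized codeword lengths $\lceil -\log_2 P(x^n) \rceil$, the overflow probability at threshold $R$ can be computed exactly from the binomial distribution. The sketch below is my simplification (the paper treats general sources and also imposes an error-probability constraint); it shows the probability shrinking with block length when $R$ exceeds the source entropy.

```python
from math import ceil, comb, log2

def overflow_probability(n, p, R):
    """P(codeword length > n*R) for an i.i.d. Bernoulli(p) source with
    idealised codeword lengths ceil(-log2 P(x^n)); a toy instance of the
    overflow event (the error-probability constraint is omitted here)."""
    total = 0.0
    for k in range(n + 1):                      # k = number of ones in x^n
        prob_seq = p ** k * (1 - p) ** (n - k)  # probability of one such sequence
        length = ceil(-log2(prob_seq))          # idealised codeword length
        if length > n * R:
            total += comb(n, k) * prob_seq      # add all sequences with k ones
    return total

# Bernoulli(0.11) has entropy close to 0.5 bit/symbol; at threshold R = 0.6
# the overflow probability shrinks as the block length grows.
for n in (50, 100, 200):
    print(n, overflow_probability(n, 0.11, 0.6))
```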