60 results for '"parity-check matrix"'
Search Results
2. A Quasi-Cyclic LDPC Code for GNSS Signal
- Author
-
Yang, Yi, Liu, Changjian, Zhang, Xiaoqing, Sun, Jiadong, editor, Jiao, Wenhai, editor, Wu, Haitao, editor, and Shi, Chuang, editor
- Published
- 2013
3. Z2Z4Z8-Cyclic codes
- Author
-
Aydogdu, Ismail and Gursoy, Fatmanur
- Published
- 2019
4. An 8-Cycle Construction Scheme for Latin Square LDLC
- Author
-
Xie Feng and Zhao Danfeng
- Subjects
Combinatorics ,Discrete mathematics ,Parity-check matrix ,Latin square ,Lattice (order) ,Low density ,Adjacency matrix ,Permutation matrix ,Symbol error rate ,Mathematics - Abstract
A novel method for constructing the parity check matrix of Latin square Low Density Lattice Codes (LDLC) is proposed in this paper. First, a matrix containing 6-cycles is built from permutation matrices. Then the elements involved in each 6-cycle are detected and swapped, based on the theory of adjacency matrices. We thereby obtain a check matrix that is 6-cycle free and whose nonzero elements follow a random distribution (a toy illustration of short-cycle detection follows this entry). Simulation results demonstrate that the proposed method achieves a lower symbol error rate (SER) than previous works.
- Published
- 2016
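As a concrete, simplified illustration of the cycle-removal step in the entry above: the sketch below, assuming Python with numpy, flags 4-cycles in a binary parity check matrix, which occur exactly when two columns share ones in two or more rows. The paper itself targets 6-cycles with a more involved adjacency-matrix computation; the function name and toy matrix here are invented for illustration.

```python
# Hypothetical sketch: find column pairs of a binary parity-check
# matrix H that lie on a length-4 cycle of the Tanner graph, i.e.
# pairs of columns sharing ones in at least two rows.
import numpy as np

def four_cycle_pairs(H):
    # overlap[i, j] = number of rows where columns i and j both hold a 1
    overlap = H.T @ H
    n = H.shape[1]
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if overlap[i, j] >= 2]

H = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 0, 1, 1]])
print(four_cycle_pairs(H))   # [(0, 1)]: columns 0 and 1 overlap in rows 0 and 1
```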
5. Secure Computation from Leaky Correlated Randomness
- Author
-
Divya Gupta, Yuval Ishai, Hemanta K. Maji, and Amit Sahai
- Subjects
Parity-check matrix ,Theoretical computer science ,Oblivious transfer ,Computer science ,business.industry ,Computation ,Leakage rate ,Secure multi-party computation ,Context (language use) ,Cryptography ,business ,Randomness - Abstract
Correlated secret randomness is an essential resource for information-theoretic cryptography. In the context of secure two-party computation, the high level of efficiency achieved by information-theoretic protocols has motivated a paradigm of starting with correlated randomness, specifically random oblivious transfer (OT) correlations. This correlated randomness can be generated and stored during an offline preprocessing phase, long before the inputs are known. But what if some information about the correlated randomness is leaked to an adversary or to the other party? Can we still recover “fresh” correlated randomness after such leakage has occurred?
- Published
- 2015
6. Performance Analysis of Error-Correcting Binary Decision Diagrams
- Author
-
Helena Astola, Stanislav Stanković, and Jaakko Astola
- Subjects
Matrix (mathematics) ,Parity-check matrix ,Theoretical computer science ,Binary decision diagram ,Computer science ,Decision tree ,Influence diagram ,Block diagram ,Algorithm ,Linear code ,Decoding methods - Abstract
Decision diagrams are an efficient way of representing switching functions and they are easily mapped to technology. A layout of a circuit is directly determined by the shape and the complexity of the decision diagram. By combining the theory of error-correcting codes with decision diagrams, it is possible to form robust circuit layouts, which can detect and correct errors. The method of constructing robust decision diagrams is analogous to the decoding process of linear codes, and is based on simple matrix and look-up operations. In this paper, the performance of robust binary decision diagrams is analyzed by determining the error probabilities for such constructions. Depending on the error-correcting properties of the code used in the construction, the error probability of a circuit can be significantly decreased by a robust decision diagram.
- Published
- 2012
7. High Speed LDPC Encoder Architecture for Digital Video Broadcasting Systems
- Author
-
Ji-Won Jung, Deock-Gil Oh, In-Ki Lee, and Min-Hyuk Kim
- Subjects
Parity-check matrix ,Computer science ,business.industry ,Digital Video Broadcasting ,Real-time computing ,Data_CODINGANDINFORMATIONTHEORY ,Low-density parity-check code ,Field-programmable gate array ,business ,Throughput (business) ,Encoder ,Computer hardware ,Parity bit - Abstract
In this paper, we propose a high-speed LDPC encoder architecture for the DVB-S2 standard. The proposed LDPC encoding architecture is based on parallel 360-bit-wise operations. The key issues in realizing high speed are two kinds of index addressing and efficient use of memory. We implemented a half-rate LDPC encoder on an FPGA and confirmed that its maximum throughput is up to 10 Gbps with a 100 MHz clock.
- Published
- 2012
8. Capacity Achieving Two-Write WOM Codes
- Author
-
Amir Shpilka
- Subjects
Discrete mathematics ,Parity-check matrix ,Encoding (memory) ,Code (cryptography) ,Arithmetic ,Connection (algebraic framework) ,Linear code ,Binary alphabet ,Decoding methods ,Extractor ,Mathematics - Abstract
In this paper we give several new constructions of WOM codes. The novelty in our constructions is the use of the so-called Wozencraft ensemble of linear codes. Specifically, we obtain the following results. We give an explicit construction of a two-write Write-Once-Memory (WOM for short) code that approaches capacity over the binary alphabet. More formally, for every ε > 0, 0
- Published
- 2012
9. Decoding Algorithms for Random Linear Network Codes
- Author
-
Janus Heide, Morten V. Pedersen, and Frank H. P. Fitzek
- Subjects
Parity-check matrix ,Finite field ,Computer science ,Linear network coding ,List decoding ,Sequential decoding ,Linear code ,Algorithm ,Decoding methods - Abstract
We consider the problem of efficient decoding of a random linear code over a finite field. In particular, we are interested in the case where the code is random and relatively sparse, and we use the binary finite field as an example. The goal is to decode the data using fewer operations, to potentially achieve a high coding throughput and reduce energy consumption. We use an on-the-fly version of the Gauss-Jordan algorithm as a baseline (sketched after this entry) and provide several simple improvements to reduce the number of operations needed to perform decoding. Our tests show that the improvements can reduce the number of operations used during decoding by 10-20% on average, depending on the code parameters.
- Published
- 2011
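The baseline named in the abstract above can be made concrete with a small sketch. The class below is a minimal illustration under assumed names, not the authors' implementation: it performs on-the-fly Gauss-Jordan elimination over GF(2), with coefficient vectors and payloads held as Python ints so that XOR is field addition.

```python
# Illustrative on-the-fly Gauss-Jordan decoder over GF(2).
class OnTheFlyDecoder:
    def __init__(self, generation_size):
        self.n = generation_size
        self.rows = {}                         # pivot bit -> (coeffs, payload)

    def receive(self, coeffs, payload):
        # Forward elimination: clear every stored pivot from the new row.
        for p, (c, d) in sorted(self.rows.items(), reverse=True):
            if (coeffs >> p) & 1:
                coeffs ^= c                    # XOR = addition in GF(2)
                payload ^= d
        if coeffs == 0:
            return                             # linearly dependent, discard
        pivot = coeffs.bit_length() - 1
        # Backward elimination: clear the new pivot from stored rows.
        for p, (c, d) in list(self.rows.items()):
            if (c >> pivot) & 1:
                self.rows[p] = (c ^ coeffs, d ^ payload)
        self.rows[pivot] = (coeffs, payload)

    def decoded(self):
        """Return the n original symbols once the matrix has full rank."""
        if len(self.rows) < self.n:
            return None
        return [self.rows[p][1] for p in range(self.n)]

dec = OnTheFlyDecoder(2)
dec.receive(0b11, 0b1010 ^ 0b0110)   # packet mixing symbols 0 and 1
dec.receive(0b01, 0b1010)            # packet carrying symbol 0 alone
print(dec.decoded())                 # [10, 6]: both symbols recovered
```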
10. LDPC Equalizer for Compensating the CFO and Phase Noise in OFDM System
- Author
-
Do-Hoon Kim and Heung-Gyoon Ryu
- Subjects
Channel capacity ,Parity-check matrix ,Mean squared error ,Computer science ,Orthogonal frequency-division multiplexing ,Phase noise ,Adaptive equalizer ,Data_CODINGANDINFORMATIONTHEORY ,Low-density parity-check code ,Performance improvement ,Algorithm ,Computer Science::Information Theory - Abstract
In this paper, we propose a decision feedback equalizer based on an LDPC (Low Density Parity Check) code for fast processing and performance improvement. LDPC codes have good error-correcting ability and performance approaching the Shannon capacity, but they require a long parity check matrix and many decoding iterations. In the proposed system, the MSE (mean square error) of the signal between the decision device and the decoder is fed back to the equalizer. With this method, the proposed system improves performance because it corrects the estimated channel response more accurately. In addition, the proposed system reduces complexity because it needs fewer iterations than a system without feedback at the same performance. We evaluate the performance of an OFDM system subject to CFO and phase noise in a multipath channel via simulation.
- Published
- 2011
11. Index to Constant Weight Codeword Converter
- Author
-
Jon T. Butler and Tsutomu Sasao
- Subjects
Discrete mathematics ,Theoretical computer science ,Monte Carlo method ,Code word ,TheoryofComputation_GENERAL ,Binary number ,Data_CODINGANDINFORMATIONTHEORY ,Parity-check matrix ,Combinatorial number system ,Hardware_ARITHMETICANDLOGICSTRUCTURES ,Boolean function ,Mathematics ,Coding (social sciences) ,Electronic circuit - Abstract
A constant weight codeword is a binary n-tuple with exactly r 1's. We show two circuits that generate constant weight codewords. The first is based on the combinatorial number system. Its input is an index to the codeword: there are C(n, r) n-bit codewords with exactly r 1's, and the index is a binary number between 0 and C(n, r) − 1 that selects a unique codeword (a software sketch of this mapping follows this entry). Such a circuit is useful for encoding data. If a random constant weight codeword is needed, as in Monte Carlo simulations, then the index is random. If only random constant weight codewords are needed, then our other circuit is even more compact. It is based on a trellis configuration. Both designs can be pipelined to produce one constant weight codeword per clock period. We give experimental results showing the efficiency of our designs on the SRC-6 reconfigurable computer.
- Published
- 2011
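The index-to-codeword mapping of the first circuit above is easy to state in software. A minimal sketch, assuming the standard combinatorial number system; the hardware pipelining is the paper's actual contribution and is not reproduced here.

```python
# Unrank an index in [0, comb(n, r)) into an n-bit word with exactly
# r ones, via the combinatorial number system (greedy combinadic).
from math import comb

def index_to_codeword(index, n, r):
    assert 0 <= index < comb(n, r)
    word = [0] * n
    for pos in range(n - 1, -1, -1):         # scan positions high to low
        if r > 0 and comb(pos, r) <= index:  # greedy: take this position
            word[pos] = 1
            index -= comb(pos, r)
            r -= 1
    return word

# All comb(4, 2) = 6 weight-2 words of length 4, one per index:
for i in range(6):
    print(i, index_to_codeword(i, 4, 2))
```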
12. Low Complexity Decoding Algorithm for Nonbinary LDPC Codes
- Author
-
Lian Huang, Wei Li, and Xue-Fei Yang
- Subjects
Computer Science::Hardware Architecture ,Parity-check matrix ,Berlekamp–Welch algorithm ,Concatenated error correction code ,List decoding ,Data_CODINGANDINFORMATIONTHEORY ,Serial concatenated convolutional codes ,Sequential decoding ,Low-density parity-check code ,Algorithm ,Decoding methods ,Computer Science::Information Theory ,Mathematics - Abstract
A low complexity decoding algorithm is proposed to reduce the decoding complexity of nonbinary quasi-cyclic LDPC codes. The algorithm works in the logarithm domain with look-up tables, building on the FFT-QSPA algorithm and avoiding the multiplication and division operations that slow hardware down; as a result, the algorithm is easier to realize in hardware. The parity check matrices, of quasi-cyclic form, are constructed from finite fields, which benefits linear encoding based on feedback shift registers. The letter presents a complete scheme covering the construction, encoding, and decoding of nonbinary quasi-cyclic LDPC codes with low complexity. Simulations show that nonbinary quasi-cyclic LDPC codes achieve significant coding gains over RS codes at lower complexity.
- Published
- 2011
13. Practical Power Analysis Attacks on Software Implementations of McEliece
- Author
-
Amir Moradi, Stefan Heyse, and Christof Paar
- Subjects
Computer science ,business.industry ,Cryptography ,Computer security ,computer.software_genre ,Power analysis ,Parity-check matrix ,Computer engineering ,Robustness (computer science) ,McEliece cryptosystem ,Goppa code ,Cryptosystem ,business ,computer ,Decoding methods - Abstract
The McEliece public-key cryptosystem is based on the fact that decoding unknown linear binary codes is an NP-complete problem. Interest in implementing post-quantum cryptographic algorithms such as McEliece on microprocessor-based platforms has risen sharply with the increasing storage capacity of these platforms. Therefore, their vulnerability and robustness against physical attacks, e.g., state-of-the-art power analysis attacks, must be investigated. In this work, we address two power analysis attacks on various implementations of McEliece on an 8-bit AVR microprocessor. To the best of our knowledge, this is the first time that such side-channel attacks have been practically evaluated.
- Published
- 2010
14. Applications of Adaptive Belief Propagation Decoding for Long Reed-Solomon Codes
- Author
-
Dang Hai Pham, Tomohisa Wada, and Zhian Zheng
- Subjects
Parity-check matrix ,Berlekamp–Welch algorithm ,Reed–Solomon error correction ,Code (cryptography) ,List decoding ,Data_CODINGANDINFORMATIONTHEORY ,Sequential decoding ,Belief propagation ,Algorithm ,Decoding methods ,Computer Science::Information Theory ,Mathematics - Abstract
The Reed-Solomon (204,188) code has been widely used in many digital multimedia broadcasting systems. This paper focuses on a low complexity derivation of adaptive belief propagation bit-level soft-decision decoding for this code. Simulation results demonstrate that the proposed normalized min-sum algorithm, used as the belief propagation (BP) process, provides the same decoding performance in terms of packet error rate as the sum-product algorithm. An outer adaptation scheme, which moves the parity check matrix adaptation out of the BP iteration loop, is also introduced to reduce decoding complexity. Simulation results show that the two proposed schemes offer a good trade-off between decoding performance and decoding complexity.
- Published
- 2010
15. Low Density Parity Check Code for the Single Carrier Frequency Division Multiple Access
- Author
-
Jinsang Kim and Mohammad Rakibul Islam
- Subjects
Carrier signal ,Theoretical computer science ,Computer science ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,Data_CODINGANDINFORMATIONTHEORY ,Division (mathematics) ,Term (time) ,Parity-check matrix ,Telecommunications link ,Computer Science::Networking and Internet Architecture ,Code (cryptography) ,Low-density parity-check code ,Algorithm ,Computer Science::Information Theory - Abstract
A new encoding technique for low density parity check (LDPC) codes at the uplink of 3rd Generation Partnership Project Long Term Evolution (3GPP LTE) is proposed. The concept of approximate lower triangulation (ALT) is used, in which the parity check matrix is preprocessed so that encoding runs in O(n) complexity (a sketch of the triangular-encoding idea follows this entry). This encoding technique is applied in the uplink of the LTE system, where single carrier frequency division multiple access (SC-FDMA) is used as the multiple access scheme. As the encoding is performed in O(n) complexity, it outperforms existing LDPC encoding, which is not used in the LTE uplink due to its high encoding complexity. The proposed encoding is simulated in the SC-FDMA scenario and the BER curve is shown.
- Published
- 2009
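The O(n) encoding claim above rests on (approximate) lower triangulation. A minimal sketch of the fully triangular case, with toy matrices invented for illustration: writing H = [A | T] with T unit lower triangular, the parity bits follow from A·s + T·p = 0 (mod 2) by forward substitution, which costs O(n) overall when H is sparse.

```python
# Systematic LDPC encoding when the parity part T of H = [A | T] is
# unit lower triangular: solve T p = A s over GF(2) by substitution.
import numpy as np

def encode(A, T, s):
    m = T.shape[0]
    b = (A @ s) % 2                 # right-hand side A s
    p = np.zeros(m, dtype=int)
    for i in range(m):              # forward substitution; T[i, i] == 1
        p[i] = (b[i] + T[i, :i] @ p[:i]) % 2
    return np.concatenate([s, p])   # systematic codeword [s | p]

A = np.array([[1, 0, 1], [0, 1, 1]])
T = np.array([[1, 0], [1, 1]])
s = np.array([1, 0, 1])
c = encode(A, T, s)
H = np.hstack([A, T])
assert ((H @ c) % 2 == 0).all()     # c is a valid codeword
print(c)                            # [1 0 1 0 1]
```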
16. On Linear Programming Decoding on a Quantized Additive White Gaussian Noise Channel
- Author
-
Eirik Rosnes
- Subjects
Code word ,Data_CODINGANDINFORMATIONTHEORY ,Binary symmetric channel ,Combinatorics ,symbols.namesake ,Parity-check matrix ,Quantization (physics) ,Additive white Gaussian noise ,symbols ,Low-density parity-check code ,Algorithm ,Decoding methods ,Pairwise error probability ,Computer Science::Information Theory ,Mathematics - Abstract
In this work, we consider the pairwise error probability (PEP) of a linear programming (LP) decoder for a general binary linear code, as formulated by Feldman et al. (IEEE Trans. Inf. Theory, March 2005), on a quantized additive white Gaussian noise (AWGN) channel. By a quantized AWGN (QAWGN) channel we mean a channel where we first compute log-likelihood ratios as for an AWGN channel and then quantize them. Let H be a parity-check matrix of a binary linear code and consider LP decoding based on H. The output of the LP decoder is always a pseudo-codeword of some pseudo-weight, where the definition of pseudo-weight is specific to the underlying channel model (the standard AWGN pseudo-weight is recalled after this entry for comparison). In this work, we give a definition of pseudo-weight for a QAWGN channel based on an asymptotic (high signal-to-noise ratio) analysis of the PEP. Note that with maximum-likelihood decoding, the parameters of the quantization scheme, i.e., the quantization levels and the corresponding quantization region thresholds, that minimize the PEP of wrongly decoding to a non-zero codeword c when the all-zero codeword is transmitted are independent of the specific codeword c. However, this is not the case with LP decoding based on a parity-check matrix H, which means that the quantization scheme needs to be optimized for the given H. As a case study, we consider the well-known (3,5)-regular (155,64,20) Tanner code and estimate its minimum QAWGN pseudo-weight with 3 and 5 levels of quantization, in which the quantization scheme is optimized to maximize the minimum QAWGN pseudo-weight.
- Published
- 2009
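For comparison with the QAWGN definition above, the standard pseudo-weight on the unquantized AWGN channel, as used in the LP-decoding literature (not restated in the abstract itself), is:

```latex
% AWGN pseudo-weight of a nonzero pseudo-codeword \omega = (\omega_1, \ldots, \omega_n):
w_{\mathrm{AWGN}}(\omega) = \frac{\left( \sum_{i=1}^{n} \omega_i \right)^{2}}{\sum_{i=1}^{n} \omega_i^{2}}
```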
17. Towards a Concrete Security Proof of Courtois, Finiasz and Sendrier Signature Scheme
- Author
-
Léonard Dallot
- Subjects
Parity-check matrix ,Theoretical computer science ,Goppa code ,Code (cryptography) ,Concrete security ,Coding theory ,Decoding methods ,Schnorr signature ,Computer Science::Information Theory ,Computer Science::Cryptography and Security ,Mathematics ,Random oracle - Abstract
Courtois, Finiasz and Sendrier proposed in 2001 a practical code-based signature scheme. We give a rigorous security analysis of a modified version of this scheme in the random oracle model. Our reduction involves two problems of coding theory widely considered as difficult, the Goppa Parametrized Bounded Decoding and the Goppa Code Distinguishing.
- Published
- 2008
18. Joint Source-Cryptographic-Channel Coding Based on Linear Block Codes
- Author
-
Haruhiko Kaneko and Eiji Fujiwara
- Subjects
Parity-check matrix ,Theoretical computer science ,Shannon–Fano coding ,Tunstall coding ,Variable-length code ,Data_CODINGANDINFORMATIONTHEORY ,Entropy encoding ,Algorithm ,Coding gain ,Context-adaptive binary arithmetic coding ,Mathematics ,Arithmetic coding - Abstract
This paper proposes a joint coding with three functions: source coding, channel coding, and public-key encryption. A codeword is simply generated as the product of an encoding matrix and a sparse information word. This encoding method has much lower encoding complexity than the conventional technique in which source coding, encryption, and channel coding are applied successively to an information word. The encoding matrix is generated using two linear error control codes and randomly generated nonsingular matrices. Encryption is based on the intractability of factorizing a matrix into randomly constructed factor matrices, and of decoding an error control code defined by a random parity-check matrix. Evaluation shows that the proposed joint coding gives a lower bit error rate and a better compression ratio than the conventional codings.
- Published
- 2007
19. Exploiting Thread-Level Parallelism of Irregular LDPC Decoder with Simultaneous Multi-threading Technique
- Author
-
Shuming Chen, Dong Wang, and Xing Fang
- Subjects
Parity-check matrix ,Very long instruction word ,Computer science ,Multithreading ,Task parallelism ,Data_CODINGANDINFORMATIONTHEORY ,Parallel computing ,Low-density parity-check code ,Error detection and correction ,Simultaneous multithreading ,Parity bit - Abstract
Irregular LDPC (Low Density Parity Check) codes are powerful error correction codes in wireless communication applications. However, an irregular LDPC decoder has limited instruction-level parallelism. This paper exploits the thread-level parallelism of irregular LDPC decoders with simultaneous multithreading (SMT) techniques. Simulations with randomly constructed parity check matrices under different signal-to-noise ratios and three block lengths show that the SMT technique attains a 16.7%-45.3% performance improvement at an area cost increase of about 17.73%, which suggests that SMT is an efficient technique for improving the performance of irregular LDPC decoders.
- Published
- 2007
20. Practical Quantum Oblivious Transfer
- Author
-
Marie-Hélène Skubiszewska, Charles H. Bennett, Claude Crépeau, and Gilles Brassard
- Subjects
Value (ethics) ,Parity-check matrix ,Theoretical computer science ,Quantum cryptography ,Oblivious transfer ,Computer science ,String (computer science) ,Commitment scheme ,Quantum ,Protocol (object-oriented programming) - Abstract
We describe a protocol for quantum oblivious transfer, utilizing faint pulses of polarized light, by which one of two mutually distrustful parties ("Alice") transmits two one-bit messages in such a way that the other party ("Bob") can choose which message he gets but cannot obtain information about both messages (he will learn his chosen bit's value with exponentially small error probability and may gain at most exponentially little information about the value of the other bit), and Alice will be entirely ignorant of which bit he received. Neither party can cheat (i.e., deviate from the protocol while appearing to follow it) in such a way as to obtain more information than what is given by the description of the protocol. Our protocol is easy to modify in order to implement the All-or-Nothing Disclosure of one out of two string messages, and it can be used to implement bit commitment and oblivious circuit evaluation without complexity-theoretic assumptions, in a way that remains secure even against cheaters that have unlimited computing power. Moreover, this protocol is practical in that it can be realized with available opto-electronic apparatus while being immune to any technologically feasible attack for the foreseeable future.
- Published
- 2007
21. Attacks on Protocols for Server-Aided RSA Computation
- Author
-
Birgit Pfitzmann and Michael Waidner
- Subjects
Protocol (science) ,Correctness ,Speedup ,Card holder ,Computer science ,business.industry ,Computation ,Computer security ,computer.software_genre ,Parity-check matrix ,Smart card ,business ,Chinese remainder theorem ,computer - Abstract
At Crypto '88, Matsumoto, Kato, and Imai presented protocols to speed up secret computations with insecure auxiliary devices. The two most important protocols enable a smart card to compute the secret RSA operation faster with the help of a server that is not necessarily trusted by the card holder. It was stated that if RSA is secure, the protocols could only be broken by exhaustive search in certain spaces. Our main attacks show that much smaller search spaces suffice. These attacks are passive and therefore undetectable. It was already known that one of the protocols is vulnerable to active attacks. We show that this holds for the other protocol, too. More importantly, we show that our attack may still work if the smart card checks the correctness of the result; this was previously believed to be an easy measure excluding all active attacks. Finally, we discuss attacks on related protocols.
- Published
- 2007
22. New Traceability Codes Against a Generalized Collusion Attack for Digital Fingerprinting
- Author
-
Shigeichi Hirasawa, Hideki Yagi, and Toshiyasu Matsushima
- Subjects
Parity-check matrix ,Traceability ,Computer science ,Digital content ,Data_MISCELLANEOUS ,Collusion ,Code (cryptography) ,ComputingMilieux_LEGALASPECTSOFCOMPUTING ,State (computer science) ,Computer security ,computer.software_genre ,computer - Abstract
In this paper, we discuss collusion-secure traceability codes for digital fingerprinting, a technique for copyright protection of digital content. We first state a generalization of conventional collusion attacks, in which illicit users of a digital content collude to create an illegal copy. Then we propose a collusion-secure traceability code that can detect at least one colluder under this generalized attack. We show the rate and properties of the proposed traceability code.
- Published
- 2007
23. How Can Reed-Solomon Codes Improve Steganographic Schemes?
- Author
-
Fabien Galand and Caroline Fontaine
- Subjects
Berlekamp–Welch algorithm ,Computer science ,List decoding ,020206 networking & telecommunications ,020207 software engineering ,Data_CODINGANDINFORMATIONTHEORY ,02 engineering and technology ,Linear code ,Parity-check matrix ,Reed–Solomon error correction ,0202 electrical engineering, electronic engineering, information engineering ,Algorithm ,Decoding methods ,BCH code - Abstract
The use of syndrome coding in steganographic schemes tends to reduce distortion during embedding. The most complete model comes from wet paper codes [FGLS05], which allow locking positions that cannot be modified. Recently, BCH codes have been investigated and seem to be good candidates in this context [SW06]. Here, we show that Reed-Solomon codes are twice as good with respect to the number of locked positions and that, in fact, they are optimal. We propose two methods for managing these codes in this context: the first is based on a naive decoding process through Lagrange interpolation; the second, more efficient, is based on list decoding techniques and provides an adaptive trade-off between the number of locked positions and the embedding efficiency. (A toy sketch of syndrome-coding embedding follows this entry.)
- Published
- 2007
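This entry (and the next) builds on syndrome-coding embedding. Below is a minimal brute-force sketch of that shared mechanism, assuming Python with numpy; the Reed-Solomon structure, locked "wet" positions, and efficient decoding that the paper actually contributes are all omitted.

```python
# Syndrome coding for embedding: flip as few cover bits as possible
# so that the stego vector y satisfies H y = m (mod 2); the recipient
# recovers the message as the syndrome of y.
import numpy as np
from itertools import combinations

def embed(H, x, m):
    n = H.shape[1]
    deficit = (m - H @ x) % 2          # syndrome we still need to add
    if not deficit.any():
        return x.copy()
    for w in range(1, n + 1):          # try flip patterns of growing weight
        for pos in combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(pos)] = 1
            if ((H @ e) % 2 == deficit).all():
                return (x + e) % 2     # minimal-weight change found
    raise ValueError("some syndromes are unreachable for this H")

H = np.array([[1, 0, 1, 1, 0], [0, 1, 1, 0, 1]])   # 2 message bits in 5 cover bits
x = np.array([1, 1, 0, 0, 1])                      # cover bits
m = np.array([0, 1])                               # message
y = embed(H, x, m)                                 # here: a single bit flip
assert ((H @ y) % 2 == m).all()                    # recipient recovers m = H y
```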
24. Reducing the Complexity of Syndrome Coding for Embedding
- Author
-
Antje Winkler and Dagmar Schönfeld
- Subjects
Parity-check matrix ,Theoretical computer science ,Polynomial code ,Computer science ,Embedding ,Data_CODINGANDINFORMATIONTHEORY ,BCH code ,Coding (social sciences) - Abstract
This paper deals with strategies to dramatically reduce the complexity of embedding based on syndrome coding. In contrast to existing approaches, our goal is to keep the embedding efficiency constant, i.e., to embed with less complexity without increasing the average number of embedding changes, compared to the classic matrix embedding scenario. Our considerations are generally based on structured codes, especially BCH codes, but are not limited to this class of codes. We propose different approaches to reducing embedding complexity, concentrating on both syndrome coding based on a parity check matrix and syndrome coding based on the generator polynomial.
- Published
- 2007
25. A General Framework for Applying FGLM Techniques to Linear Codes
- Author
-
Edgar Martínez-Moro, M. Borges-Quintana, and M. A. Borges-Trenard
- Subjects
Discrete mathematics ,Algebra ,Parity-check matrix ,Gröbner basis ,Representative element ,Mathematics::Commutative Algebra ,Computer Science::Symbolic Computation ,Canonical form ,Binary code ,Linear code ,Mathematics ,Coding (social sciences) - Abstract
We show herein that a pattern based on FGLM techniques can be used for computing Gröbner bases, or related structures, associated with linear codes. This Gröbner basis setting turns out to be strongly related to the combinatorics of the codes.
- Published
- 2006
26. Tumor Classification from Gene Expression Data: A Coding-Based Multiclass Learning Approach
- Author
-
Alexander Hüntemann, Elizabeth Tapia, and José Carlos González
- Subjects
Block code ,business.industry ,Computer science ,Pattern recognition ,Coding theory ,Machine learning ,computer.software_genre ,Linear code ,Support vector machine ,Parity-check matrix ,ComputingMethodologies_PATTERNRECOGNITION ,Robustness (computer science) ,Artificial intelligence ,Low-density parity-check code ,business ,computer ,Coding (social sciences) - Abstract
The effectiveness of cancer treatment depends strongly on an accurate diagnosis. In this paper we propose a system for automatic and precise diagnosis of a tumor's origin based on genetic data. The system combines coding theory techniques with machine learning algorithms. In particular, tumor classification is cast as a multiclass learning setup, where gene expression values allow the system to distinguish between types of tumors. Since multiclass learning is intrinsically complex, the data is divided into several biclass problems whose results are combined with an error-correcting linear block code. The robustness of the prediction is increased as errors of the base binary classifiers are corrected by the linear code. Promising results have been achieved, with a best-case precision of 72% when the system was tested on real data from cancer patients.
- Published
- 2005
27. Efficient Wet Paper Codes
- Author
-
David Soukal, Jessica Fridrich, and Miroslav Goljan
- Subjects
Block code ,Parity-check matrix ,Theoretical computer science ,Steganography ,Computer science ,Embedding ,Data_CODINGANDINFORMATIONTHEORY ,Erasure code ,GeneralLiterature_MISCELLANEOUS ,Communication channel - Abstract
Wet paper codes were proposed as a tool for constructing steganographic schemes with an arbitrary selection channel that is not shared between the sender and the recipient. In this paper, we describe new approaches to wet paper codes that enjoy low computational complexity and improved embedding efficiency (number of message bits per embedding change). Some applications of wet paper codes to steganography and data embedding in binary images are discussed.
- Published
- 2005
28. Joint Detection for a Bandwidth Efficient Modulation Method
- Author
-
Lenan Wu, Zhonghui Mei, and Shiyuan Zhang
- Subjects
Parity-check matrix ,Narrowband ,Bandwidth efficient ,Modulation ,Computer science ,business.industry ,Electronic engineering ,Data_CODINGANDINFORMATIONTHEORY ,Low-density parity-check code ,Telecommunications ,business ,Computer Science::Information Theory - Abstract
A bandwidth efficient communication model is proposed via a narrowband band-pass filter at the transmitting end of the system. In order to reduce the ISI caused by the filter, a joint detection algorithm based on low-density parity-check (LDPC) codes is presented.
- Published
- 2005
29. High Rate UWB-LDPC Code and Its Soft Initialization
- Author
-
Jia Hou and Moon Ho Lee
- Subjects
Parity-check matrix ,Computer science ,Pulse-position modulation ,Initialization ,Low-density parity-check code ,Algorithm ,Decoding methods ,Computer Science::Other ,Computer Science::Information Theory ,Phase-shift keying - Abstract
In this paper, we describe a kind of high rate low density parity check (LDPC) code and its decoding scheme for ultra wideband (UWB) communication. Particularly, the proposed scheme uses a hybrid soft normalization algorithm from the autocorrelations of UWB signals to initialize the LDPC decoder, and it integrates several normalized soft values of UWB signals as sine (BPSK) or cosine (PPM) elements by using the window concept.
- Published
- 2005
30. Beyond Boosting: Recursive ECOC Learning Machines
- Author
-
Alexander Hütermann, José Carlos González, Elizabeth Tapia, and Javier García
- Subjects
Boosting (machine learning) ,business.industry ,Computer science ,Decision tree ,Machine learning ,computer.software_genre ,BrownBoost ,Parity-check matrix ,Artificial intelligence ,Decision stump ,Gradient boosting ,Low-density parity-check code ,business ,computer - Abstract
We present extensive experimental work evaluating the behaviour of Recursive ECOC (RECOC) [1] learning machines based on Low Density Parity Check (LDPC) coding structures. We show that, owing to the iterative decoding algorithms behind LDPC codes, RECOC multiclass learning is achieved progressively. This learning behaviour confirms the existence of a new boosting dimension, the one provided by the coding space. We present a method for finding potentially good RECOC codes from LDPC ones. Starting from a properly selected LDPC code, we assess the effect of boosting on both weak and strong binary learners. For nearly all domains, we find that boosting a strong learner like a Decision Tree is as effective as boosting a weak one like a Decision Stump. This surprising result substantiates the hypothesis that weakening strong classifiers by boosting has a decorrelation effect, which can be used to improve RECOC learning.
- Published
- 2004
31. Analysis of FEC Codes for Partially Reliable Media Broadcasting Schemes
- Author
-
Vincent Roca and Christoph P. Neumann
- Subjects
Router ,Computer science ,Network packet ,business.industry ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,Packet erasure channel ,Data_CODINGANDINFORMATIONTHEORY ,Parity-check matrix ,Forward error correction ,Error detection and correction ,business ,Decoding methods ,Computer network ,Communication channel - Abstract
With many multimedia delivery schemes a client does not necessarily receive a given media content entirely (e.g., router congestion can cause information loss). Mechanisms like error concealment and error-resilient video coding allow the receiver to deal with partially received data and to play the content, albeit with decreased quality. Packet-level Forward Error Correction (FEC) can be used as a complementary technique to counter the effects of losses and reconstruct the missing packets. However, if the number of packets received is too low for the FEC decoding process to finish, the received parity packets may turn out to be useless, and in the end more source packets may be unavailable to the application than if FEC had not been used at all. This paper analyzes the adequacy of the LDGM Staircase, LDGM Triangle and RSE FEC codes for offering a partial reliability service for media content distribution over any kind of packet erasure channel, that is to say a service that enables a receiver to reconstruct parts of the content even if the FEC decoding process has not finished. We analyze this service in the context of a broadcasting system that has no feedback channel and offers media content distribution, like Digital Video/Audio Broadcasting.
- Published
- 2004
32. 9 New, Expander-Based List Decodable Codes
- Author
-
Venkatesan Guruswami
- Subjects
Discrete mathematics ,Block code ,Parity-check matrix ,TheoryofComputation_MATHEMATICALLOGICANDFORMALLANGUAGES ,ComputingMilieux_THECOMPUTINGPROFESSION ,Computer science ,Concatenated error correction code ,Reed–Muller code ,Expander graph ,Binary code ,Data_CODINGANDINFORMATIONTHEORY ,Locally decodable code ,Linear code - Abstract
In the previous chapters, we have already seen constructions of asymptotically good codes of good rate over both large alphabets (the AG-codes from Chapter 6) and the binary alphabet (the concatenated codes from Chapter 8), that are efficiently list decodable up to a “maximum” possible radius.
- Published
- 2004
33. 2 Preliminaries and Monograph Structure
- Author
-
Venkatesan Guruswami
- Subjects
Structure (mathematical logic) ,Code (set theory) ,Parity-check matrix ,Work (electrical) ,Computer science ,Programming language ,computer.software_genre ,Notation ,Linear code ,computer - Abstract
In this chapter, we review the basic definitions relating to error-correcting codes and standardize some notation. We then give a brief description of the fundamental code families and constructions that will be dealt with and used in this book. Finally, we discuss the structure of this work and the main results which are established in the technical chapters that follow, explaining in greater detail how the results of the various chapters fit together.
- Published
- 2004
34. Sparse Parity-Check Matrices over Finite Fields
- Author
-
Hanno Lefmann
- Subjects
Algebra ,Combinatorics ,Parity-check matrix ,Finite field ,Integer ,Parity (mathematics) ,Prime power ,Upper and lower bounds ,M-matrix ,Mathematics - Abstract
For fixed positive integers k, q, r with q a prime power and large m, we investigate matrices with m rows and a maximum number N_q(m, k, r) of columns, such that each column contains at most r nonzero entries from the finite field GF(q) and each k columns are linearly independent over GF(q). For even k we prove the lower bound N_q(m, k, r) = Ω(m^(kr/(2(k−1)))), and N_q(m, k, r) = Ω(m^((k−1)r/(2(k−2)))) for odd k ≥ 3. For k = 2^i and gcd(k−1, r) = k−1 we obtain N_q(m, k, r) = Θ(m^(kr/(2(k−1)))), while for any even k ≥ 4 and gcd(k−1, r) = 1 we have N_q(m, k, r) = Ω(m^(kr/(2(k−1))) · (log m)^(1/(k−1))). For char(GF(q)) > 2 we prove that N_q(m, 4, r) = Θ(m^(⌈4r/3⌉/2)), while for q = 2^l we only have N_q(m, 4, r) = O(m^(⌈4r/3⌉/2)). We can find matrices fulfilling these lower bounds in polynomial time. Our results extend and complement earlier results from [4,14], where the case q = 2 was considered.
- Published
- 2003
35. A Geometric View of Decoding AG Codes
- Author
-
Drue Coles and Thanasis Bouganis
- Subjects
Algebra ,Parity-check matrix ,Finite field ,Line bundle ,Vector bundle ,Local parameter ,Coding theory ,Linear code ,Decoding methods ,Mathematics - Abstract
We investigate the use of vector bundles over finite fields to obtain a geometric view of decoding algebraic-geometric codes. Building on ideas of Trygve Johnsen, who revealed a connection between the errors in a received word and certain vector bundles on the underlying curve, we give explicit constructions of the relevant geometric objects and efficient algorithms for some general computations needed in the constructions. The use of vector bundles to understand decoding as a geometric process is the first application of these objects to coding theory.
- Published
- 2003
36. Cluttered Orderings for the Complete Graph
- Author
-
Charles J. Colbourn, Dalibor Froncek, and Myra B. Cohen
- Subjects
Computer science ,Complete graph ,Mixed graph ,De Bruijn graph ,Vertex (geometry) ,Combinatorics ,Parity-check matrix ,symbols.namesake ,Steiner system ,Cycle graph ,symbols ,Erasure ,Multiple edges ,Erasure code - Abstract
In a systematic erasure code for the correction of two simultaneous erasures, each information symbol must have two associated parity symbols. When implemented in a redundant array of independent disks (RAID), performance requirements on the update penalty necessitate that each information symbol be associated with no more parity symbols than the two required. This leads to a simple graph model of the erasure codes, with parity symbols as vertices and information symbols as edges. Ordering the edges so that no more than f check disks (vertices) appear among any set of d consecutive edges is found to optimize access performance of the disk array when d is maximized. These cluttered orderings are examined for the complete graph Kn. The maximum number d of edges is determined precisely when f ≤ 5 and when f = n - 1, and bounds are derived in the remaining cases.
- Published
- 2001
37. Algebraic Constructions for PSK Space-Time Coded Modulation
- Author
-
Alex Grant, Andrew M. Guidi, and Steven S. Pietrobon
- Subjects
Block code ,Parity-check matrix ,Algebraic number ,Algorithm ,Linear code ,Polynomial matrix ,Decoding methods ,Phase-shift keying ,Mathematics ,Characteristic polynomial - Abstract
We consider the design of phase shift keyed space-time coded modulation for two-antenna systems based on linear codes over rings. Design rules for constructing full-diversity systematic space-time codes based on underlying existing algebraic codes were first presented by Hammons and El Gamal in 2000. We reformulate and simplify these design rules, resulting in the condition that the characteristic polynomial of the parity generation matrix must be irreducible. We further extend the results to non-systematic codes. These results yield a recursive construction based on the Schur determinant formula. The resulting block codes are guaranteed to provide full diversity advantage. In addition, the code construction is such that the corresponding parity check matrix is sparse, enabling the use of the powerful Sum-Product algorithm for decoding.
- Published
- 2001
38. Recursive Adaptive ECOC models
- Author
-
Elizabeth Tapia, José Carlos González, Javier García-Villalba, and Julio Villena
- Subjects
Parity-check matrix ,Adaptive algorithm ,Computer science ,Iterative method ,Low-density parity-check code ,Error detection and correction ,Algorithm ,Factor graph ,Decoding methods ,Parity bit - Abstract
A general framework for the design of error-adaptive learning algorithms in multiple-output domains, based on Dietterich's ECOC approach, recursive error-correcting output codes and iterative APP decoding methods, is proposed. A particular class of these Recursive ECOC (RECOC) learning algorithms, based on Low Density Parity Check codes, is presented.
- Published
- 2001
39. Optimal and Pessimal Orderings of Steiner Triple Systems in Disk Arrays
- Author
-
Myra B. Cohen and Charles J. Colbourn
- Subjects
Discrete mathematics ,RAID ,Disk array ,Column (database) ,Steiner tree problem ,law.invention ,Combinatorics ,Parity-check matrix ,symbols.namesake ,Combinatorial design ,law ,symbols ,Erasure code ,Mathematics - Abstract
Steiner triple systems are well studied combinatorial designs that have been shown to possess properties desirable for the construction of multiple erasure codes in RAID architectures. The ordering of the columns in the parity check matrices of these codes affects system performance. Combinatorial problems involved in the generation of good and bad column orderings are defined, and examined for small numbers of accesses to consecutive data blocks in the disk array.
- Published
- 2000
40. Modifications of the Rao-Nam Cryptosystem
- Author
-
Angela I. Barbero and Øyvind Ytrehus
- Subjects
Scheme (programming language) ,Parity-check matrix ,Goppa code ,Key (cryptography) ,Error correcting ,Cryptosystem ,Arithmetic ,computer ,Algorithm ,computer.programming_language ,Mathematics - Abstract
Rao and Nam [7] proposed a secret-key cryptosystem based on error correcting codes. After breaking the original system by a chosen-plaintext attack, Struik and van Tilburg [8] improved the Rao-Nam cryptosystem. However, the size of the key remains a practical problem for their improved scheme as well. We discuss several modifications of the improved Rao-Nam system. The goal of these modifications is to reduce the amount of secret key that needs to be exchanged, while maintaining the security of the system.
- Published
- 2000
41. Identification of Bad Signatures in Batches
- Author
-
Dariusz Michatek, Josef Pieprzyk, Jaroslaw Pastuszak, and Jennifer Seberry
- Subjects
Public-key cryptography ,Parity-check matrix ,Digital signature ,Computer science ,business.industry ,business ,Measure (mathematics) ,Algorithm ,Hamming code ,Signature (logic) ,Parity bit - Abstract
The paper addresses the problem of bad signature identification in batch verification of digital signatures. The number of generic tests necessary to identify all bad signatures in a batch instance is used to measure the efficiency of verifiers. The divide-and-conquer verifier DCV_α(x, n) is defined; it identifies all bad signatures in a batch instance x of length n by repeatedly splitting the input into α sub-instances (a toy sketch follows this entry). Its properties are investigated. In particular, probability distributions for the number of generic tests necessary to identify one, two and three bad signatures are derived. The average numbers of generic tests necessary to identify from 1 to 16 bad signatures are obtained from computer simulation. Further, a Hamming verifier (HV) is defined, which identifies a single bad signature in a batch of length n = 2^k − 1 using k+2 tests. HV is generalised into the two-layer Hamming verifier (2HV): given a batch instance of length 2^k − 2, the 2HV verifier identifies a single bad signature using k+2 tests and two bad signatures at the expense of 3k+3 tests. The work concludes with comments about a general model for verification codes identifying t bad signatures and the design of verifiers using combinatorial structures.
- Published
- 2000
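A minimal sketch of the divide-and-conquer idea above with α = 2, in which batch_ok stands in for the scheme's generic batch test and booleans stand in for signatures; all names here are invented for illustration.

```python
def find_bad(batch, batch_ok, counter):
    """Recursively isolate bad signatures; counter[0] counts generic tests."""
    counter[0] += 1
    if batch_ok([sig for _, sig in batch]):
        return []                      # whole sub-batch verifies: no bad ones
    if len(batch) == 1:
        return [batch[0][0]]           # isolated a bad signature
    mid = len(batch) // 2
    return (find_bad(batch[:mid], batch_ok, counter) +
            find_bad(batch[mid:], batch_ok, counter))

# Toy model: a "signature" is a boolean, bad when False, and the
# generic batch test is all().
sigs = [True] * 15
sigs[3] = sigs[11] = False
counter = [0]
print(find_bad(list(enumerate(sigs)), all, counter),
      "found using", counter[0], "generic tests")   # [3, 11]
```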
42. Low Complexity Soft-Decision Sequential Decoding Using Hybrid Permutation for Reed-Solomon Codes
- Author
-
Min-seok Oh and Peter Sweeney
- Subjects
Computer science ,Error floor ,Berlekamp–Welch algorithm ,List decoding ,Parity of a permutation ,Data_CODINGANDINFORMATIONTHEORY ,Sequential decoding ,Permutation group ,Parity-check matrix ,Permutation ,Reed–Solomon error correction ,Algorithm ,Decoding methods ,Computer Science::Information Theory - Abstract
We present a soft-decision decoding method for Reed-Solomon codes (RS codes) using both cyclic and squaring permutations. These permutations are used to provide a convenient sequence which is predicted to have a relatively low-complexity error pattern with respect to a modified Fano sequential algorithm [1]. In order to preserve bit-level soft-decision values, each sequence of those permutation groups must keep an equal weight distribution at symbol and bit level. The trellis construction is based on Wolf's method [2], and a binary systematic parity check matrix of RS codes is used for bit-level decoding [9]. Simulation results show that a hybrid of the two permutations can be used for low-complexity decoding approaching maximum-likelihood performance.
- Published
- 1999
43. Gröbner Bases for Decoding
- Author
-
Ruud Pellikaan and Mario de Boer
- Subjects
Parity-check matrix ,Finite field ,Binary Golay code ,Computer science ,Cyclic code ,Encoding (memory) ,Finite geometry ,Arithmetic ,Linear code ,Decoding methods - Abstract
From the previous chapter one might get the impression that the theory of error-correcting codes is equivalent to the theory of finite geometry or arrangements over finite fields. This is not true from a practical point of view. A code is useless without a decoding algorithm. For engineers the total performance of the encoding and decoding scheme is important.
- Published
- 1999
44. Security comments on the Hwang-Chen algebraic-code cryptosystem
- Author
-
Mohssen Alabbadi
- Subjects
Discrete mathematics ,Computer science ,business.industry ,Dimension (graph theory) ,Cryptography ,Data_CODINGANDINFORMATIONTHEORY ,Encryption ,Secret sharing ,Public-key cryptography ,Parity-check matrix ,Goppa code ,Code (cryptography) ,Cryptosystem ,business ,Algorithm - Abstract
Hwang and Chen have proposed a private-key cryptosystem that provides joint error correction, encryption, and “message” integrity. The scheme is based on algebraic error-correcting codes, using a random chaining technique. It was shown that obtaining a combinatorially equivalent code, under a ciphertext-only attack, requires O(k·2^n) operations and O(k·2^(n/2)) ciphertexts, where n and k are the length and dimension of the code respectively. It was further claimed that obtaining an equivalent code is not sufficient to “totally” break the system. In this paper, a chosen-plaintext attack is presented that is able to break the system, requiring O((n − k)·2^(k/2)) ciphertexts and O((n − k)·2^k) operations; the attack is based on obtaining a combinatorially equivalent code. Finally, a modified version of the scheme is proposed that overcomes the weaknesses of the original Hwang-Chen scheme; the complexity to break this modified scheme is O(k·2^(n/2)) ciphertexts and O(kn·2^(n/2)) operations.
- Published
- 1997
45. An iterative probabilistic decoding algorithm for binary linear block codes beyond the half minimum distance
- Author
-
Miodrag J. Mihaljevic
- Subjects
Block code ,Parity-check matrix ,Berlekamp–Welch algorithm ,Concatenated error correction code ,Data_CODINGANDINFORMATIONTHEORY ,Sequential decoding ,Error detection and correction ,Linear code ,Algorithm ,Decoding methods ,Mathematics - Abstract
An algorithm for decoding linear binary block codes based on iterative probabilistic error-correction is considered theoretically and experimentally. Assuming a sparse code parity-check matrix, it is demonstrated that, with probability close to one, the algorithm works far beyond half the minimum distance, with complexity linearly proportional to the codeword length and to the average number of parity checks employed per bit.
- Published
- 1997
46. The cryptographic security of the syndrome decoding problem for rank distance codes
- Author
-
Jacques Stern and Florent Chabaud
- Subjects
Discrete mathematics ,Combinatorics ,Parity-check matrix ,Rank (linear algebra) ,Berlekamp–Welch algorithm ,Concatenated error correction code ,List decoding ,Sequential decoding ,Linear code ,Decoding methods ,Mathematics - Abstract
We present an algorithm that achieves general syndrome decoding of an (n, k, r) linear rank distance code over GF(q^m) in O((nr + m)^3 · q^((m−r)(r−1))) elementary operations. As a consequence, the cryptographic schemes [Che94, Che96] which rely on this problem are not secure with the proposed parameters. We also derive from our algorithm a bound on the minimal rank distance of a linear code which shows that the parameters from [Che94] are inconsistent.
- Published
- 1996
47. Dynamic declustering methods for parallel grid files
- Author
-
Paolo Ciaccia and Arianna Veronesi
- Subjects
Structure (mathematical logic) ,Set (abstract data type) ,Parity-check matrix ,Range query (data structures) ,Computer science ,Grid file ,Data_FILES ,Grid reference ,Directory ,Parallel computing ,Grid - Abstract
Several declustering functions for distributing multi-attribute data on a set of disks have been proposed in recent years. Since these functions map grid regions to disks in a static way, performance deteriorates for dynamic datasets and/or non-stationary data distributions. We first analyze how declustering functions can be extended to deal with dynamic datasets without requiring periodic reorganizations. In order to support dynamic declustering, we propose to organize the directory as a parallel Multilevel Grid File. On this structure we experiment with five dynamic declustering functions and two index-based allocation methods that use only locally available information. This first comparison between the two approaches reveals that methods based on local criteria consistently yield better results.
- Published
- 1996
48. On sparse parity check matrices (extended abstract)
- Author
-
Hanno Lefmann, Petr Savický, and Pavel Pudlák
- Subjects
Discrete mathematics ,Combinatorics ,Matrix (mathematics) ,Parity-check matrix ,Steiner system ,Modulo ,Linear independence ,Parity (mathematics) ,Upper and lower bounds ,Linear code ,Mathematics - Abstract
We consider the extremal problem of determining the maximal number N(m, k, r) of columns of a 0-1 matrix with m rows and at most r ones in each column such that each k columns are linearly independent modulo 2. For fixed integers k ≥ 2 and r ≥ 1, we show the probabilistic lower bound N(m, k, r) = Ω(m^(kr/(2(k−1)))); for k a power of 2, we prove the upper bound N(m, k, r) = O(m^(⌈kr/(k−1)⌉/2)), which matches the lower bound for infinitely many values of r. We give some explicit constructions.
- Published
- 1996
49. An Efficient Pseudo-Random Generator Provably as Secure as Syndrome Decoding
- Author
-
Jacques Stern and Jean-Bernard Fischer
- Subjects
Scheme (programming language) ,Parity-check matrix ,Theoretical computer science ,Quadratic equation ,Generator (computer programming) ,Simple (abstract algebra) ,Computer science ,Sequential decoding ,Pseudorandom generator ,Algorithm ,computer ,Decoding methods ,computer.programming_language - Abstract
We show a simple and efficient construction of a pseudo-random generator based on the intractability of an NP-complete problem from the area of error-correcting codes. The generator is proved to be as secure as a hard instance of the syndrome decoding problem (the core syndrome mapping is sketched after this entry). Each application of the scheme generates a linear number of bits in only quadratic computing time.
- Published
- 1996
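The hardness assumption above can be illustrated by its core mapping: expanding a short description of a low-weight word e into the longer syndrome H·e (mod 2), which is hard to invert if syndrome decoding is hard. A toy sketch with invented parameters; the paper's full iterated generator is not reproduced here.

```python
# Core one-way step of a syndrome-based generator: a weight-d word,
# described by its d one-positions (about log2(C(n, d)) input bits),
# maps to an (n - k)-bit syndrome.
import numpy as np

rng = np.random.default_rng(1)
n, k, d = 24, 12, 3
H = rng.integers(0, 2, size=(n - k, n))   # random binary parity-check matrix

def syndrome_step(positions):
    e = np.zeros(n, dtype=int)
    e[list(positions)] = 1                # weight-d error word
    return tuple((H @ e) % 2)             # its syndrome H e (mod 2)

# log2(C(24, 3)) ≈ 11 input bits expand to n - k = 12 output bits.
print(syndrome_step((2, 7, 19)))
```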
50. On the security of some cryptosystems based on error-correcting codes
- Author
-
Florent Chabaud
- Subjects
Computer science ,Authentication scheme ,Computer security ,computer.software_genre ,Linear code ,Parity-check matrix ,symbols.namesake ,Number theory ,Gaussian elimination ,Error correcting ,symbols ,Hybrid cryptosystem ,Cryptosystem ,computer ,Algorithm ,Computer Science::Cryptography and Security - Abstract
A number of public-key cryptosystems based on error-correcting codes have been proposed as an alternative to algorithms based on number theory. In this paper, we analyze the algorithms that can be used to attack such cryptosystems in a very precise way, and we optimize them. We thereby obtain attacks more efficient than those previously known. Even though these attacks remain infeasible, they indicate which cryptosystem parameters are ruled out by the existence of these algorithms.
- Published
- 1995