123,354 results
Search Results
2. Overlap Detection in 2D Amorphous Shapes for Paper Optimization in Digital Printing Presses
- Author
-
Rafael Rivera-López, Juan Manuel Rendón-Mancha, Marco Antonio Cruz-Chávez, Yainier Labrada-Nueva, Marta Lilia Eraña-Díaz, and Martín H. Cruz-Rosales
- Subjects
Computer science, Iterated local search, General Mathematics, Resource allocation, Neighborhood structure, Reduction (complexity), Overlaps, Perturbations, Paper waste, Amorphous shapes, Digital printing, Algorithm, Mathematics
Paper waste in the design of mockups with regular, irregular, and amorphous patterns is a critical problem in digital printing presses. Reducing paper waste directly lowers production costs, yielding both business and environmental benefits. The problem can be mapped to the two-dimensional irregular bin-packing problem. This paper introduces an iterated local search algorithm with a novel neighborhood structure for detecting overlaps between amorphous shapes. The algorithm is used to solve the paper waste problem, modeled as a 2D irregular bin-packing problem. Experimental results show that the approach detects and corrects overlaps between regular, irregular, and amorphous figures both efficiently and effectively.
- Published
- 2021
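The overlap-driven local search in the abstract above can be sketched in simplified form. This is an illustrative Python sketch only: it approximates shapes by axis-aligned bounding rectangles rather than amorphous outlines, and uses a generic iterated local search with random relocation as the perturbation; all names and parameters are hypothetical, not taken from the paper.

```python
import random

def rects_overlap(a, b):
    # Axis-aligned rectangles (x, y, w, h) overlap iff their intervals intersect on both axes
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def total_overlaps(rects):
    # Number of overlapping pairs -- the quantity the neighborhood structure drives toward zero
    n = len(rects)
    return sum(rects_overlap(rects[i], rects[j]) for i in range(n) for j in range(i + 1, n))

def local_search(rects, sheet_w, sheet_h, steps=200, rng=random):
    # Relocate one shape at a time, rejecting any move that increases the overlap count
    rects = list(rects)
    for _ in range(steps):
        if total_overlaps(rects) == 0:
            break
        i = rng.randrange(len(rects))
        x, y, w, h = rects[i]
        old = total_overlaps(rects)
        rects[i] = (rng.uniform(0, sheet_w - w), rng.uniform(0, sheet_h - h), w, h)
        if total_overlaps(rects) > old:
            rects[i] = (x, y, w, h)  # revert worsening move
    return rects

def iterated_local_search(rects, sheet_w, sheet_h, restarts=5, seed=0):
    rng = random.Random(seed)
    best = local_search(rects, sheet_w, sheet_h, rng=rng)
    for _ in range(restarts):
        # Perturbation: relocate a random shape, then re-run the local search
        perturbed = list(best)
        i = rng.randrange(len(perturbed))
        _, _, w, h = perturbed[i]
        perturbed[i] = (rng.uniform(0, sheet_w - w), rng.uniform(0, sheet_h - h), w, h)
        cand = local_search(perturbed, sheet_w, sheet_h, rng=rng)
        if total_overlaps(cand) < total_overlaps(best):
            best = cand
    return best
```

The overlap count plays the role of the penalty that the paper's neighborhood structure evaluates; real amorphous shapes would need a polygon intersection test in place of `rects_overlap`.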
3. A Parameterization Approach for the Dielectric Response Model of Oil Paper Insulation Using FDS Measurements
- Author
-
Lijun Yang, Ran Liman, Peng He, Chao Wei, Youyuan Wang, Feng Yang, and Lin Du
- Subjects
Control and Optimization, Akaike's information criterion, Frequency band, Energy Engineering and Power Technology, Capacitance, Dielectric response, Oil paper insulation, Frequency domain spectroscopy, Extended Debye model, Parameterization, Syncretic algorithm, Goodness of fit, Time domain, Electrical and Electronic Engineering, Renewable Energy, Sustainability and the Environment, Transformation (function), Algorithm, Energy (signal processing), Voltage, Mathematics
To facilitate better interpretation of dielectric response measurements, and thereby provide numerical evidence for condition assessment of oil-paper-insulated equipment in high-voltage alternating current (HVAC) transmission systems, a novel approach is presented to estimate the parameters of the extended Debye model (EDM) using wideband frequency domain spectroscopy (FDS). A syncretic algorithm integrating a genetic algorithm (GA) with the Levenberg-Marquardt (L-M) algorithm is introduced to parameterize the EDM from FDS measurements of a real-life 126 kV oil-impregnated paper (OIP) bushing under different controlled temperatures. To address the structural uncertainty of the EDM arising from its variable number of branches, Akaike's information criterion (AIC) is employed to determine the model order. For verification, a comparative analysis of the FDS reconstruction and of the FDS transformation to polarization-depolarization current (PDC) and return voltage measurement (RVM) results is presented. The comparison demonstrates good agreement between the measured and reconstructed spectroscopies of complex capacitance and tan δ over the full tested frequency band (10⁻⁴ Hz to 10³ Hz), with goodness of fit above 0.99. Deviations between the tested and modelled PDC/RVM derived from FDS are then discussed. Compared with previous studies that parameterize the model using time-domain dielectric responses, the proposed method solves the problematic matching between the EDM and FDS, especially over a wide frequency band, and therefore provides a basis for quantitative insulation condition assessment of OIP-insulated apparatus in energy systems.
- Published
- 2018
4. On the Dispersions of the Gel’fand–Pinsker Channel and Dirty Paper Coding
- Author
-
Jonathan Scarlett
- Subjects
Independent and identically distributed random variables, Gaussian, Variable-length code, Library and Information Sciences, Gel'fand–Pinsker channel, Combinatorics, Second-order coding rate, Shannon–Fano coding, Dirty paper coding, Channel dispersion, Channels with state, Algorithm, Encoder, Decoding methods, Information Systems, Mathematics, Coding
This paper studies the second-order coding rates for memoryless channels with a state sequence known non-causally at the encoder. In the case of finite alphabets, an achievability result is obtained using constant-composition random coding, and by using a small fraction of the block to transmit the empirical distribution of the state sequence. For error probabilities less than 0.5, it is shown that the second-order rate improves on an existing one based on independent and identically distributed random coding. In the Gaussian case (dirty paper coding) with an almost-sure power constraint, an achievability result is obtained using random coding over the surface of a sphere, and using a small fraction of the block to transmit a quantized description of the state power. It is shown that the second-order asymptotics are identical to the single-user Gaussian channel of the same input power without a state.
- Published
- 2015
5. Estimating the DP Value of the Paper Insulation of Oil-Filled Power Transformers Using an ANFIS Algorithm
- Author
-
Peter Werle, T. Kinkeldey, Suwarno Suwarno, and Firza Zulmi Rhamadhan
- Subjects
Surface tension, Adaptive neuro-fuzzy inference system, Breakdown voltage, Cellulose insulation, Transformer, Space partitioning, Cluster analysis, Algorithm, Mathematics
The condition of the transformer insulation affects the transformer's performance during operation. Aging of the oil-impregnated cellulose insulation and the associated loss of mechanical strength are important factors limiting the life expectancy of a transformer. The Degree of Polymerization (DP) is the parameter commonly used to assess the condition of oil-impregnated cellulose insulation. An Adaptive Neuro-Fuzzy Inference System (ANFIS) has been developed to predict the DP value from chemical characteristics and dissolved gas parameters (acidity, interfacial tension, CO, CO₂, breakdown voltage, and water content of the oil). This paper develops several ANFIS configurations that differ in the input space partitioning method used to generate rules (grid partition or subtractive clustering) and in whether the data are normalized. The estimation results have been observed and evaluated, showing that the ANFIS algorithm is suitable for estimating the insulation condition of transformers operating in the field.
- Published
- 2021
6. Learning regularization parameters of inverse problems via deep neural networks
- Author
-
Julianne Chung, Matthias Chung, and Babak Maboudi Afkham
- Subjects
Machine learning, Bilevel optimization, Regularization, Bayes' theorem, Design objective, Deep neural networks, Numerical analysis, Representation (mathematics), Mathematical physics, Applied mathematics, Supervised learning, Deep learning, Inverse problem, Optimal experimental design, Signal processing, Minimization, Algorithm, Hyperparameter selection, Theoretical computer science, Mathematics
In this work, we describe a new approach that uses deep neural networks (DNN) to obtain regularization parameters for solving inverse problems. We consider a supervised learning approach, where a network is trained to approximate the mapping from observation data to regularization parameters. Once the network is trained, regularization parameters for newly obtained data can be computed by efficient forward propagation of the DNN. We show that a wide variety of regularization functionals, forward models, and noise models may be considered. The network-obtained regularization parameters can be computed more efficiently and may even lead to more accurate solutions compared to existing regularization parameter selection methods. We emphasize that the key advantage of using DNNs for learning regularization parameters, compared to previous works on learning via optimal experimental design or empirical Bayes risk minimization, is greater generalizability. That is, rather than computing one set of parameters that is optimal with respect to one particular design objective, DNN-computed regularization parameters are tailored to the specific features or properties of the newly observed data. Thus, our approach may better handle cases where the observation is not a close representation of the training set. Furthermore, we avoid the need for expensive and challenging bilevel optimization methods as utilized in other existing training approaches. Numerical results demonstrate the potential of using DNNs to learn regularization parameters.
- Published
- 2021
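A minimal sketch of the pipeline described above, with two loudly labeled simplifications: the inverse problem is plain Tikhonov regularization, and the DNN is replaced by a linear least-squares predictor trained on the same kind of supervised (observation features, best lambda) pairs. All names and dimensions are illustrative, not the authors' implementation.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    # Regularized least squares: argmin ||Ax - b||^2 + lam * ||x||^2
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def train_lambda_predictor(A, x_true_samples, noise_level, lam_grid, rng):
    # Supervised training data: for each clean signal, find the lambda on a grid
    # that minimizes reconstruction error, paired with features of the observation.
    feats, targets = [], []
    for x in x_true_samples:
        b = A @ x + noise_level * rng.standard_normal(A.shape[0])
        errs = [np.linalg.norm(tikhonov_solve(A, b, l) - x) for l in lam_grid]
        feats.append([np.linalg.norm(b)])
        targets.append(lam_grid[int(np.argmin(errs))])
    # Stand-in for the DNN: a least-squares linear map from features to lambda
    F = np.hstack([np.array(feats), np.ones((len(feats), 1))])
    w, *_ = np.linalg.lstsq(F, np.array(targets), rcond=None)
    return lambda b: float(np.array([np.linalg.norm(b), 1.0]) @ w)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
samples = [rng.standard_normal(10) for _ in range(30)]
predict_lam = train_lambda_predictor(A, samples, 0.5, np.logspace(-3, 1, 20), rng)

# Inference: one cheap forward evaluation yields a data-adaptive lambda
x_new = rng.standard_normal(10)
b_new = A @ x_new + 0.5 * rng.standard_normal(20)
lam = predict_lam(b_new)
x_hat = tikhonov_solve(A, b_new, max(lam, 1e-6))
```

The point mirrored from the abstract is that, once trained, the predictor assigns a parameter per observation at negligible cost, instead of one fixed parameter for all data.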
7. Feature Extraction of PD in Oil-paper Insulation based on Three-Parameter Weibull Distribution
- Author
-
Hongru Zhang, Wenxiu Gong, Bin An, Qingquan Li, Jiawen Zhou, and Mu Qiao
- Subjects
Optimization algorithm, Feature extraction, Particle swarm optimization, Partial discharge, Transformer, Weibull distribution, Algorithm, Mathematics
Oil-paper insulation is the main insulation form in large transformers. Partial discharge (PD) in oil-paper insulation not only damages insulation performance but is also a precursor and manifestation of insulation deterioration. The Weibull distribution parameters are important characteristic parameters of PD. In this paper, a method is proposed to extract characteristic parameters of PD based on the three-parameter Weibull model; the grey wolf optimization algorithm and the particle swarm optimization algorithm are then used to calculate the parameters of the model. The results show that both algorithms obtain the parameters quickly and accurately, and that the model effectively reflects the characteristics of the $H_{n}(q)$ spectrum of PD pulse counts.
- Published
- 2020
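The three-parameter Weibull fit via particle swarm optimization described above can be sketched as follows. This is a hedged illustration, not the authors' implementation: it fits the model CDF to the empirical CDF by squared error, and the PSO constants (inertia 0.7, acceleration 1.5) are conventional textbook choices, not values from the paper.

```python
import numpy as np

def weibull3_cdf(x, shape, scale, loc):
    # Three-parameter Weibull CDF; zero below the location (threshold) parameter
    z = np.clip((x - loc) / scale, 0.0, None)
    return 1.0 - np.exp(-z ** shape)

def fit_weibull3_pso(samples, n_particles=30, iters=80, seed=0):
    # Particle swarm search over (shape, scale, loc), minimizing the squared
    # error between the model CDF and the empirical CDF of the samples.
    rng = np.random.default_rng(seed)
    xs = np.sort(np.asarray(samples, dtype=float))
    ecdf = (np.arange(len(xs)) + 0.5) / len(xs)

    def cost(p):
        shape, scale, loc = max(p[0], 1e-6), max(p[1], 1e-6), min(p[2], xs[0])
        return float(np.sum((weibull3_cdf(xs, shape, scale, loc) - ecdf) ** 2))

    lo = np.array([0.5, 0.1, xs[0] - 1.0])
    hi = np.array([5.0, 5.0, xs[0]])
    pos = rng.uniform(lo, hi, size=(n_particles, 3))
    vel = np.zeros_like(pos)
    pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        # Standard velocity update: inertia + cognitive pull + social pull
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        c = np.array([cost(p) for p in pos])
        better = c < pbest_cost
        pbest[better], pbest_cost[better] = pos[better], c[better]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest, float(pbest_cost.min())
```

The grey wolf optimizer mentioned in the abstract would replace only the swarm update rule; the cost function stays the same.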
8. A Density Evolution Based Framework for Dirty-Paper Code Design Using TCQ and Multilevel LDPC Codes
- Author
-
Wu Yu-chun, Philipp Zhang, Zixiang Xiong, and Yang Yang
- Subjects
Theoretical computer science, Iterative method, Quantization (signal processing), Evolutionary computation, Computer Science Applications, Modeling and Simulation, Differential evolution, Bit error rate, Dirty paper coding, Electrical and Electronic Engineering, Low-density parity-check code, Algorithm, Decoding methods, Mathematics
We propose a density evolution based dirty-paper code design framework that combines trellis coded quantization with multi-level low-density parity-check (LDPC) codes. Unlike existing design techniques based on Gaussian approximation and EXIT charts, the proposed framework tracks the empirically collected log-likelihood ratio (LLR) distributions at each iteration, and employs density evolution and differential evolution algorithms to design each LDPC component code. The performance of the dirty-paper codes designed using the proposed method comes within 0.37 dB of the theoretical limit at a transmission rate of 1 bit per sample, achieving a 0.21 dB gain over the best known result.
- Published
- 2012
9. Wet paper codes and the dual distance in steganography
- Author
-
Carlos Munuera and Morgan Barbier
- Subjects
Cryptography and security, Computer networks and communications, Information theory, Image (mathematics), Distortion, Discrete mathematics and combinatorics, Steganography, Wet paper codes, Algebra and number theory, Applied mathematics, Coding theory, Linear code, Dual distance, Embedding, Orthogonal array, Error detection and correction, Error-correcting codes, Algorithm, Mathematics
In 1998 Crandall introduced a method based on coding theory to secretly embed a message in a digital support such as an image. Later, in 2005, Fridrich et al. improved this method to minimize the distortion introduced by the embedding, a process known as wet paper coding. However, as previously emphasized in the literature, this method can fail during the embedding step. Here we establish necessary and sufficient conditions to guarantee a successful embedding by studying the dual distance of a linear code. Since these results are essentially combinatorial in nature, they can be generalized to systematic codes, a large family containing all linear codes. We also compute the exact number of embedding solutions and point out the relationship between wet paper codes and orthogonal arrays.
- Published
- 2012
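The embedding step discussed above reduces to solving a linear system over GF(2) restricted to the changeable ("dry") positions of the cover; the failure case the paper studies corresponds exactly to an inconsistent system. A minimal sketch, assuming syndrome-style message extraction with a parity-check matrix H (names and the toy matrix are illustrative):

```python
import numpy as np

def wet_paper_embed(H, x, wet, msg):
    # Find a stego vector y with H y = msg (mod 2) that differs from the cover x
    # only on dry positions; the wet positions are left untouched.
    H = np.asarray(H) % 2
    x = np.asarray(x) % 2
    dry = [j for j in range(H.shape[1]) if j not in set(wet)]
    s = (np.asarray(msg) - H @ x) % 2        # residual syndrome to realize
    # Gaussian elimination over GF(2) on the augmented system [H_dry | s]
    M = np.concatenate([H[:, dry], s.reshape(-1, 1)], axis=1).astype(int)
    rows = M.shape[0]
    pivots, r = [], 0
    for c in range(len(dry)):
        piv = next((i for i in range(r, rows) if M[i, c]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]
        for i in range(rows):
            if i != r and M[i, c]:
                M[i] = (M[i] + M[r]) % 2
        pivots.append(c)
        r += 1
        if r == rows:
            break
    if any(M[i, -1] for i in range(r, rows)):
        return None                          # embedding fails: syndrome unreachable
    y = x.copy()
    for i, c in enumerate(pivots):
        if M[i, -1]:
            y[dry[c]] ^= 1                   # flip the dry position for this pivot
    return y
```

The `None` branch is the failure mode the abstract refers to; the paper's dual-distance conditions characterize when it cannot occur.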
10. A Robust Multi-Level Design for Dirty-Paper Coding
- Author
-
Yan Xin, Xiaodong Wang, Momin Uppal, Guosen Yue, and Zixiang Xiong
- Subjects
Theoretical computer science, Concatenated error correction code, Variable-length code, List decoding, Distributed source coding, Dirty paper coding, Forward error correction, Electrical and Electronic Engineering, Low-density parity-check code, Algorithm, Coding gain, Mathematics
We propose a robust close-to-capacity dirty-paper coding (DPC) design framework in which multi-level low density parity check (LDPC) codes and trellis coded quantization (TCQ) are employed as the channel and source coding components, respectively. The proposed design framework is robust in the sense that it yields close to capacity solutions in the high-, medium-, and low-rate regimes. This is in contrast to existing practical DPC schemes that perform well only in one or two of these regimes, but not all three. We design codes for transmission rates of 0.5, 1.0, 1.5, and 2.0 bits/sample (b/s) using one, two, three, and four LDPC levels; at a block length of 2 × 10⁵, the codes perform 0.95, 0.58, 0.55, and 0.54 dB from the corresponding information theoretic limits, respectively. We also propose a low-complexity decoding scheme that does not involve iterative message passing between the source and channel decoders; the low-complexity scheme performs only 1.08, 0.85, and 0.79 dB away from the theoretical limits at transmission rates of 1.0, 1.5, and 2.0 b/s, respectively.
- Published
- 2013
11. Toward a Practical Scheme for Binary Broadcast Channels with Varying Channel Quality Using Dirty Paper Coding
- Author
-
Chih-Chun Wang and Gyu Bum Kyung
- Subjects
Quantization (signal processing), Binary number, Binary erasure channel, Binary symmetric channel, Electronic engineering, Binary code, Dirty paper coding, Electrical and Electronic Engineering, Algorithm, Decoding methods, Communication channel, Mathematics
We consider practical schemes for binary dirty-paper channels and broadcast channels (BCs) with two receivers and varying channel quality. With the BC application in mind, this paper proposes a new design for binary dirty paper coding (DPC). By exploiting the concept of coset binning, the complexity of the system is greatly reduced when compared to the existing works. Some design challenges of the coset binning approach are identified and addressed. The proposed binary DPC system achieves similar performance to the state-of-the-art, superposition-coding-based system while demonstrating significant advantages in terms of complexity and flexibility of system design. For binary BCs, achieving the capacity generally requires the superposition of a normal channel code and a carefully designed channel code with non-uniform bit distribution. The non-uniform bit distribution is chosen according to the channel conditions. Therefore, to achieve the capacity for binary BCs with varying channel quality, it is necessary to use quantization codes of different rates, which significantly increases the implementation complexity. In this paper, we also propose a broadcast scheme that generalizes the concept of binary DPC, which we term soft DPC. By combining soft DPC with time sharing, we achieve a large percentage of the capacity for a wide range of channel quality with little complexity overhead. Our scheme uses only one fixed pair of codes for users 1 and 2, and a single quantization code, which possesses many practical advantages over traditional time sharing and superposition coding solutions and provides strictly better performance.
- Published
- 2011
12. Call for Papers - International Journal of Soft Computing, Mathematics and Control
- Author
-
Abdulaziz Alajlan
- Subjects
Algorithm, Mathematics, Soft computing, Computer
International Journal of Soft Computing, Mathematics and Control (IJSCMC) is a quarterly peer-reviewed and refereed open-access journal that publishes articles contributing new results in all areas of soft computing; pure, applied, and numerical mathematics; and control. The journal focuses on theoretical and numerical methods in soft computing, mathematics, and control theory, with applications in science and industry. Its goal is to bring together researchers and practitioners from academia and industry around the latest topics in these fields and to establish new collaborations. Authors are invited to contribute articles presenting new algorithms, theorems, modelling results, research results, projects, surveys, and industrial experiences that describe significant advances in soft computing, mathematics, and control engineering.
- Published
- 2020
13. Performance Analysis of Binned Orthogonal/Bi-Orthogonal Block Code as Dirty-Paper Code for Digital Watermarking Application
- Author
-
Kah Chan Teh, Xiaotian Xu, and Yong Liang Guan
- Subjects
Block code, Theoretical computer science, Applied Mathematics, Code word, Code rate, Signal Processing, Code (cryptography), Bit error rate, Dirty paper coding, Electrical and Electronic Engineering, Digital watermarking, Algorithm, Decoding methods, Mathematics
A binned dirty-paper code (DPC) divides a set of codewords into a number of bins. The codeword in the bin that has the maximum correlation with the side information is selected as the transmitted dirty-paper codeword. This letter derives and verifies the analytical bit-error rate (BER) expression of a binned orthogonal block code used as a DPC in watermarking applications. The BER trends of such orthogonal DPCs under constant code length or constant code rate constraints are analyzed. Finally, we propose a new class of DPC based on binned bi-orthogonal codes and demonstrate its BER performance gain over orthogonal DPC.
- Published
- 2009
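The bin-and-correlate encoding rule described above is simple to state in code. A toy sketch with a 4-word orthogonal codebook split into two bins (all values hypothetical):

```python
import numpy as np

def binned_dpc_encode(codebook, bins, message, side_info):
    # Among the codewords assigned to the message's bin, transmit the one
    # with maximum correlation with the known interference (side information).
    idxs = bins[message]
    corr = [float(codebook[i] @ side_info) for i in idxs]
    return idxs[int(np.argmax(corr))]

# Toy setup: 4 orthogonal codewords (rows of the identity) in 2 bins of 2
codebook = np.eye(4)
bins = {0: [0, 1], 1: [2, 3]}
side_info = np.array([0.1, 0.9, 0.2, -0.5])
chosen = binned_dpc_encode(codebook, bins, message=0, side_info=side_info)
# chosen is index 1: within bin 0, e1 has the larger correlation with side_info
```

The message is conveyed by which bin the decoder identifies; the within-bin choice spends the codeword's freedom on aligning with the interference.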
14. Scheduling of corrugated paper production
- Author
-
Toshihide Ibaraki, Hiroyoshi Miwa, and Kazuki Matsumoto
- Subjects
Information Systems and Management, General Computer Science, Corrugated fiberboard, Scheduling (production processes), Management Science and Operations Research, Industrial and Manufacturing Engineering, Paper machine, Production manager, Modeling and Simulation, Ordered set, Multiobjective programming, Algorithm, Integer programming, Mathematics
Corrugated paper is produced by gluing together three types of paper of the same breadth. Given a set of orders, we first assign each order to one of the standard breadths, and then sequence the orders assigned to each standard breadth so that they are manufactured continuously from the three rolls of that breadth mounted in the machine called a corrugator. The goals are multiple: minimizing the total length of roll paper used, the total paper loss caused by differences between standard breadths and the actual breadths of the orders, and the number of machine stops needed during production. We use integer programming to assign orders to standard breadths, and then develop a special-purpose algorithm to sequence the orders assigned to each standard breadth. This is a first attempt to handle scheduling problems of the corrugator machine.
- Published
- 2009
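The first stage described above, assigning orders to standard breadths, can be illustrated with a deliberately simplified rule. The paper uses integer programming with coupled sequencing objectives; this sketch shows only the per-order trim-loss idea with a narrowest-fit assignment (illustrative, not the authors' model):

```python
def assign_orders(order_breadths, standard_breadths):
    # Assign each order to the narrowest standard breadth that covers it,
    # accumulating the trim loss (standard breadth minus order breadth).
    assignment, loss = [], 0
    for b in order_breadths:
        feasible = [s for s in standard_breadths if s >= b]
        if not feasible:
            raise ValueError(f"no standard breadth fits order breadth {b}")
        s = min(feasible)
        assignment.append(s)
        loss += s - b
    return assignment, loss

# Example: two standard roll breadths, three orders
assignment, loss = assign_orders([90, 120, 100], [100, 130])
# assignment == [100, 130, 100]; loss == 10 + 10 + 0 == 20
```

In the full problem this per-order loss interacts with sequencing and machine stops, which is why the paper needs integer programming rather than a greedy rule.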
15. Modelling of frequency characteristics of the oil‐paper compound insulation based on the fractional calculus
- Author
-
Yong-ming Jing, Zong-en Li, Liu Xin, Guishu Liang, and Long Ma
- Subjects
Polarity reversal, Transformer oil, Acoustics, Frequency domain spectroscopy, Dielectric response, Atomic and Molecular Physics and Optics, Fractional calculus, Insulation system, Broadband, Electrical and Electronic Engineering, Transformer, Algorithm, Mathematics
The oil-paper compound insulation plays a vital role in the insulation structure of power transformers. To obtain the characteristics of a transformer's insulation system, it is of great importance to study the dielectric response of the oil-paper compound insulation. In this study, fractional calculus is applied to model the oil-paper compound insulation. Both a low-frequency model and a broadband high-frequency model are proposed and then verified by fitting measured data from different insulation papers. Finally, they are applied to two cases: (i) the low-frequency model is used to fit the polarity reversal property; (ii) the broadband high-frequency model is used to study the influence of parameter variations on the frequency domain spectroscopy (FDS). The results demonstrate the advantages of the proposed models compared with traditional ones.
- Published
- 2017
16. Polynomial Silent Self-Stabilizing p-Star Decomposition (Short Paper)
- Author
-
Mohammed Haddad, Colette Johnen, and Sven Köhler
- Subjects
Polynomial, Degree (graph theory), Round complexity, Star, Short paper, Combinatorics, Distributed algorithm, Algorithm, Mathematics
We present a silent self-stabilizing distributed algorithm computing a maximal p-star decomposition of the underlying communication network. Under the unfair distributed scheduler, the most general scheduler model, the algorithm converges in at most \(12\varDelta m + \mathcal {O}(m+n)\) moves, where m is the number of edges, n is the number of nodes, and \(\varDelta \) is the maximum node degree. Regarding the move complexity, our algorithm outperforms the previously known best algorithm by a factor of \(\varDelta \). While the round complexity for the previous algorithm was unknown, we show a \(5\left\lfloor \frac{n}{p+1} \right\rfloor +5\) bound for our algorithm.
- Published
- 2016
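A p-star here is one center node joined to exactly p leaf nodes, and a decomposition is maximal when no further p-star can be carved from the unused edges. The algorithm in the paper is distributed and self-stabilizing; the following is only a centralized greedy sketch of the same decomposition notion, with all details illustrative:

```python
from collections import defaultdict

def maximal_p_star_decomposition(edges, p):
    # Greedy centralized sketch: repeatedly pick a node with at least p unused
    # incident edges and carve out one p-star (a center joined to p leaves).
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    stars = []
    while True:
        center = next((c for c in sorted(adj) if len(adj[c]) >= p), None)
        if center is None:
            break                      # maximal: no p-star can still be formed
        leaves = sorted(adj[center])[:p]
        for l in leaves:
            adj[center].discard(l)
            adj[l].discard(center)
        stars.append((center, leaves))
    return stars
```

The self-stabilizing version reaches an equivalent maximal state through local moves under an unfair scheduler, which is what the move and round bounds in the abstract quantify.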
17. A close-to-capacity dirty paper coding scheme
- Author
-
S. ten Brink and Uri Erez
- Subjects
Theoretical computer science, Quantization (signal processing), Detector, Transmitter, Vector quantization, Library and Information Sciences, Interference (wave propagation), EXIT chart, Precoding, Combinatorics, Channel capacity, Single antenna interference cancellation, Code (cryptography), Dirty paper coding, Algorithm, Decoding methods, Information Systems, Mathematics, Data compression
The "writing on dirty paper" channel model offers an information-theoretic framework for precoding techniques that cancel arbitrary interference known at the transmitter. It indicates that lossless precoding is theoretically possible at any signal-to-noise ratio (SNR), and thus dirty-paper coding may serve as a basic building block in both single-user and multiuser communication systems. We design an end-to-end coding realization of a system materializing a significant portion of the promised gains. We employ multidimensional quantization based on trellis shaping at the transmitter. Coset decoding is implemented at the receiver using "virtual bits." Combined with iterative decoding of capacity-approaching codes, we achieve an improvement of 2 dB over the best scalar quantization scheme. Code design is done using the EXIT chart technique.
- Published
- 2005
18. High SNR Analysis for MIMO Broadcast Channels: Dirty Paper Coding Versus Linear Precoding
- Author
-
Juyul Lee and Nihar Jindal
- Subjects
MIMO, Absolute difference, Code rate, Library and Information Sciences, Precoding, Channel capacity, Signal-to-noise ratio, Control theory, Dirty paper coding, Algorithm, Throughput, Information Systems, Mathematics
In this correspondence, we compare the achievable throughput for the optimal strategy of dirty paper coding (DPC) to that achieved with suboptimal and lower complexity linear precoding techniques (zero-forcing and block diagonalization). Both strategies utilize all available spatial dimensions and therefore have the same multiplexing gain, but an absolute difference in terms of throughput does exist. The sum rate difference between the two strategies is analytically computed at asymptotically high SNR. Furthermore, the difference is not affected by asymmetric channel behavior when each user has a different average SNR. Weighted sum rate maximization is also considered. In the process, it is shown that allocating user powers in direct proportion to user weights asymptotically maximizes weighted sum rate.
- Published
- 2007
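The linear-precoding side of the comparison above can be made concrete with a zero-forcing sketch for a K-user MISO broadcast channel. This is an illustrative baseline with equal power allocation, not the correspondence's analysis; DPC itself requires nonlinear encoding and is not shown.

```python
import numpy as np

def zf_sum_rate(H, total_power):
    # Zero-forcing linear precoding for a K-user MISO broadcast channel:
    # the pseudoinverse cancels inter-user interference, power is split equally.
    K = H.shape[0]
    W = np.linalg.pinv(H)                    # H @ W = I_K (full row rank assumed)
    W = W / np.linalg.norm(W, axis=0)        # unit-norm beamforming vectors
    gains = np.abs(np.diag(H @ W)) ** 2      # effective per-user channel gains
    return float(np.sum(np.log2(1.0 + (total_power / K) * gains)))
```

Both ZF and DPC scale the sum rate as K log(SNR) (same multiplexing gain); the correspondence quantifies the constant rate offset between them at high SNR.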
19. Practical Dirty Paper Coding With Sum Codes
- Author
-
Kiran M. Rege, Krishna Balachandran, M. Kemal Karakayali, and Joseph H. Kang
- Subjects
Block code, Theoretical computer science, Concatenated error correction code, Variable-length code, Linear code, Systematic code, Cyclic code, Constant-weight code, Electrical and Electronic Engineering, Low-density parity-check code, Algorithm, Mathematics
In this paper, we present a practical method to construct dirty paper coding (DPC) schemes using sum codes. Unlike the commonly used approach to DPC where the coding scheme involves concatenation of a channel code and a quantization code, the proposed method embodies a unified approach that emulates the binning method used in the proof of the DPC result. Auxiliary bits are used to create the desired number of code vectors in each bin. Sum codes are obtained when information sequences augmented with auxiliary bits are encoded using linear block codes. Sum-code-based DPC schemes can be implemented using any linear block code, and entail a relatively small increase in decoder complexity when compared to standard communication schemes. They can also lead to significant reduction in transmit power in comparison to standard schemes.
- Published
- 2016
20. A Review Paper on the Application of Deconvolution Technique in Well Test Analysis: Tal Block Pakistan Case Study
- Author
-
Zaheer Ahmed, Safwan Arshad, Jawad Ahmed, and Muhammad Tauqeer
- Subjects
Well test analysis, Deconvolution, Algorithm, Mathematics
The deconvolution technique, which is based on nonlinear total least squares (TLS), is a mathematical tool that extracts the drawdown type curve from the rate and pressure history. The technique provides additional flow-regime information that would not normally be seen within the specified time period of a buildup test through conventional analysis. It is an inverse process that aims to reconstruct the theoretical reservoir pressure response during a single constant-rate production period. The technique was applied at several Tal Block wells where conventional well test analysis did not detect multiple no-flow boundaries because of the small radius of investigation. A case is presented in this paper in which deconvolution was applied to a well where reservoir pressure depletion effects did not match the test sequence under conventional analysis and the effect of negative superposition masked the boundary-dominated flow; both issues were resolved using the deconvolution technique. The technique has yielded additional data on the tested intervals, which helped resolve certain structural and reservoir uncertainties. However, its application and results should be considered in view of the assumptions involved in deconvolution, its areas of applicability, and the quality of the available data. This paper presents the principle, workings, applications, limitations, and observations of the deconvolution technique, the available methods, and a case study.
- Published
- 2017
21. The approximate capacity for the 3-receiver writing on random dirty paper channel
- Author
-
Stefano Rini and Shlomo Shamai
- Subjects
Gaussian ,020206 networking & telecommunications ,Data_CODINGANDINFORMATIONTHEORY ,02 engineering and technology ,State (functional analysis) ,Upper and lower bounds ,Superposition principle ,symbols.namesake ,Additive white Gaussian noise ,Transmission (telecommunications) ,0202 electrical engineering, electronic engineering, information engineering ,symbols ,Algorithm ,Realization (systems) ,Computer Science::Information Theory ,Mathematics ,Communication channel - Abstract
In this paper, the approximate capacity of the 3-receiver “writing on random dirty paper” (WRDP) channel is derived. In the M-receiver WRDP channel, the channel output is obtained as the sum of the channel input, white Gaussian noise, and a channel state sequence randomly selected among a set of M independent Gaussian sequences. The transmitter has non-causal knowledge of the set of possible state sequences but does not know which one is selected to produce the channel output. In the following, we derive upper and lower bounds on the capacity of the 3-receiver WRDP channel which are within a distance of at most 3 bits-per-channel-use (bpcu) of each other for all channel parameters. In the achievability proof, the channel input is composed of the superposition of three codewords: the receiver opportunistically decodes a different set of codewords, depending on the variance of the channel state appearing in the channel output. Time-sharing among multiple transmission phases is employed to guarantee that the transmitted message can be decoded regardless of the state realization. In the converse proof, we derive a novel outer bound which matches the pre-log coefficient arising in the achievability proof due to time-sharing. Although developed for the case of three possible state realizations, our results can be extended to the general WRDP channel.
- Published
- 2017
- Full Text
- View/download PDF
22. An information-theoretic analysis of dirty paper coding for informed audio watermarking
- Author
-
Andrea Abrardo, Gianluigi Ferrari, Andrea Gorrieri, and Mauro Barni
- Subjects
Rocks ,Audio signal ,Signal to noise ratio ,Speech recognition ,Speech coding ,Watermark ,Acoustic distortion ,Data_CODINGANDINFORMATIONTHEORY ,Acoustics ,Watermarking ,Channel capacity ,Computer Science::Sound ,Distortion ,Computer Science::Multimedia ,Watermarking, Signal to noise ratio, Random variables, Acoustic distortion, Interference, Rocks, Acoustics ,Dirty paper coding ,Random variables ,Interference ,Algorithm ,Digital watermarking ,Computer Science::Information Theory ,Communication channel ,Mathematics - Abstract
Upon introducing a simplified distortion model for a Gaussian audio watermarking scenario, we derive a lower bound on the Gelfand-Pinsker capacity of the watermark channel. We then use the capacity bound to design an efficient practical watermarking scheme based on dirty trellis codes. The proposed information-theoretic framework is validated, using experimentally acquired audio signals, with a recently proposed frequency-domain audio watermarking scheme. Both an “ideal” (reflection-free) Gaussian acoustic channel and a realistic multipath acoustic channel are considered.
- Published
- 2014
23. Multiple input multiple output Dirty paper coding: System design and performance
- Author
-
Dinesh Rajan and Zouhair Al-qudah
- Subjects
Space–time block code ,Noise ,Convolutional code ,Electronic engineering ,Dirty paper coding ,Data_CODINGANDINFORMATIONTHEORY ,Low-density parity-check code ,Antenna diversity ,Algorithm ,Decoding methods ,Coding gain ,Computer Science::Information Theory ,Mathematics - Abstract
In this paper, a practical multiple-input multiple-output dirty paper coding (MIMO-DPC) scheme is designed to cancel the effect of additive interference that is known perfectly to the transmitter. The proposed system uses a trio of encoders — an LDPC code, a vector quantizer implemented as a convolutional decoder, and an orthogonal space-time block code (STBC) — to achieve temporal coding gain, interference cancellation, and spatial diversity, respectively. First, we derive the equivalent noise seen by the receiver using an equivalent lattice-based dirty paper code. Then the optimal value of the power inflation factor, one of the key system parameters used to minimize the equivalent noise seen by the receiver, is derived. Furthermore, we analytically prove that the equivalent noise seen by the receiver tends to 0 as the number of receive antennas grows large. Performance results for various numbers of receive antennas are presented and show that significant reductions in bit error probability can be obtained over a system that uses no interference cancellation.
- Published
- 2012
24. 59.4L:Late-News Paper: The Biprimary Color System for E-Paper: Doubling Color Performance Compared to RGBW
- Author
-
Jason Heikenfeld, Nathan Smith, Laura Kramer, Sarah Norman, Claire Topping, Qin Liu, Mark Goulding, and Sayantika Mukherjee
- Subjects
Optics ,Pixel ,business.industry ,media_common.quotation_subject ,Contrast (vision) ,business ,Algorithm ,Mathematics ,media_common - Abstract
We demonstrate the “biprimary” color system with dual-particle electrophoretic dispersions and a 3-electrode system. Preliminary contrast ratios reach 10:1. Furthermore, an electrokinetic cell is demonstrated, confirming basic functionality for EKD panels. A theoretical pixel simulation confirms that the biprimary system doubles color performance compared to RGBW pixels.
- Published
- 2014
25. Near-capacity dirty-paper code design : a source-channel coding approach
- Author
-
Vladimir Stankovic, A.D. Liveris, Yang Yang, Yong Sun, and Zixiang Xiong
- Subjects
Source code ,Theoretical computer science ,media_common.quotation_subject ,TK ,Variable-length code ,Data_CODINGANDINFORMATIONTHEORY ,Code rate ,Library and Information Sciences ,EXIT chart ,Computer Science Applications ,Systematic code ,Shannon–Fano coding ,Turbo code ,Dirty paper coding ,Algorithm ,Information Systems ,media_common ,Mathematics - Abstract
This paper examines near-capacity dirty-paper code designs based on source-channel coding. We first point out that the performance loss in signal-to-noise ratio (SNR) in our code designs can be broken into the sum of the packing loss from channel coding and a modulo loss, which is a function of the granular loss from source coding and the target dirty-paper coding rate (or SNR). We then examine practical designs by combining trellis-coded quantization (TCQ) with both systematic and nonsystematic irregular repeat-accumulate (IRA) codes. Like previous approaches, we exploit the extrinsic information transfer (EXIT) chart technique for capacity-approaching IRA code design; but unlike previous approaches, we emphasize the role of strong source coding to achieve as much granular gain as possible using TCQ. Instead of systematic doping, we employ two relatively shifted TCQ codebooks, where the shift is optimized (via tuning the EXIT charts) to facilitate the IRA code design. Our designs synergistically combine TCQ with IRA codes so that they work together as well as they do individually. By bringing together TCQ (the best quantizer from the source coding community) and EXIT chart-based IRA code designs (the best from the channel coding community), we are able to approach the theoretical limit of dirty-paper coding. For example, at 0.25 bit per symbol (b/s), our best code design (with 2048-state TCQ) performs only 0.630 dB away from the Shannon capacity.
- Published
- 2009
26. LDPC-LDGM Based Dirty Paper Coding Techniques for Multicell Cooperative Communication System
- Author
-
Haitao Li
- Subjects
Computational complexity theory ,Quantization (signal processing) ,MIMO ,Electronic engineering ,Dirty paper coding ,Data_CODINGANDINFORMATIONTHEORY ,Transmission system ,Low-density parity-check code ,Communications system ,Algorithm ,Decoding methods ,Computer Science::Information Theory ,Mathematics - Abstract
Vector dirty paper coding (DPC) can achieve the rate region of a multicell cooperation system. Conventional DPC implementations suffer a capacity loss due to quantization, which leads to modulo loss. In this paper, we propose a vector DPC transmission scheme for multicell processing using non-binary LDPC codes and a low-density generator matrix (LDGM) based quantizer, and we present a random-coding-based proof for the encoding and decoding of the transmission system. The scheme achieves a shaping gain and significantly reduces computational complexity.
- Published
- 2009
27. Dirty Paper Coding for Fading Channels with Partial Transmitter Side Information
- Author
-
Chinmay S. Vaze and Mahesh K. Varanasi
- Subjects
Beamforming ,FOS: Computer and information sciences ,business.industry ,Computer Science - Information Theory ,Information Theory (cs.IT) ,05 social sciences ,MIMO ,Transmitter ,050801 communication & media studies ,020206 networking & telecommunications ,02 engineering and technology ,0508 media and communications ,Signal-to-noise ratio ,Transmission (telecommunications) ,0202 electrical engineering, electronic engineering, information engineering ,Fading ,Dirty paper coding ,Telecommunications ,business ,Algorithm ,Random variable ,Mathematics ,Computer Science::Information Theory - Abstract
The problem of Dirty Paper Coding (DPC) over the Fading Dirty Paper Channel (FDPC) Y = H(X + S) + Z, a more general version of Costa's channel, is studied for the case in which there is partial and perfect knowledge of the fading process H at the transmitter (CSIT) and the receiver (CSIR), respectively. A key step in this problem is to determine the optimal inflation factor (under Costa's choice of auxiliary random variable) when there is only partial CSIT. Towards this end, two iterative numerical algorithms are proposed; both are seen to yield a good choice for the inflation factor. Finally, the high-SNR (signal-to-noise ratio) behavior of the achievable rate over the FDPC is dealt with. It is proved that the FDPC (with t transmit and r receive antennas) achieves the largest possible scaling factor of min(t,r) log SNR even with no CSIT. Furthermore, in the high-SNR regime, the optimality of Costa's choice of auxiliary random variable is established even when there is partial (or no) CSIT in the special case of FDPC with t. (5 pages with 2 figures; presented at the 42nd Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, USA, Oct. 2008.)
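For the non-fading special case (H equal to identity), Costa's classical result fixes the inflation factor in closed form; the sketch below shows the standard textbook formulas only, not the iterative algorithms developed in this paper for partial CSIT.

```python
import math

def costa_inflation_factor(P, N):
    """Costa's optimal inflation factor for the scalar channel Y = X + S + Z
    with input power constraint P and noise variance N (no fading)."""
    return P / (P + N)

def dpc_rate(P, N):
    """With alpha = P/(P+N), DPC achieves the full interference-free rate
    0.5 * log2(1 + P/N), as if the known interference S were absent."""
    return 0.5 * math.log2(1 + P / N)
```

Under fading with only partial CSIT, this closed form no longer applies, which is what motivates the numerical optimization of the inflation factor in the abstract above.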
- Published
- 2009
- Full Text
- View/download PDF
28. THE UNIFED-FFT GRID TOTALIZING ALGORITHM FOR FAST O(N LOG N) METHOD OF MOMENTS ELECTROMAGNETIC ANALYSIS WITH ACCURACY TO MACHINE PRECISION (Invited Paper)
- Author
-
Vladimir Okhmatovski, Jay Kyoon Lee, and Brian J. Rautio
- Subjects
Reduction (complexity) ,Matrix (mathematics) ,Radiation ,Fast Fourier transform ,Derivative ,Electrical and Electronic Engineering ,Method of moments (statistics) ,Condensed Matter Physics ,Grid ,Time complexity ,Algorithm ,Machine epsilon ,Mathematics - Abstract
While considerable progress has been made in the realm of speed-enhanced electromagnetic (EM) solvers, these fast solvers generally achieve their results through methods that introduce additional error components by way of geometric-type approximations, sparse-matrix-type approximations, multilevel-type decomposition of interactions, and assumptions regarding the stochastic nature of EM problems. This work introduces the O(N log N) Unified-FFT grid totalizing (UFFT-GT) method, a derivative of the method of moments (MoM), which achieves fast analysis with minimal to zero reduction in accuracy relative to the direct MoM solution. The method uniquely combines FFT-enhanced matrix fill operations (MFO) calculated to machine precision with FFT-enhanced matrix solve operations (MSO) also calculated to machine precision, for an expedient solution that does not compromise accuracy.
- Published
- 2015
29. Weighted Stochastic Gradient Identification Algorithms for ARX models** This paper was supported by Specialized Research Fund for the Doctoral Program of Higher Education under Grant No. 20132302110053
- Author
-
Ai-Guo Wu, Rui-Qi Dong, and Fang-Zhou Fu
- Subjects
Identification (information) ,Current (mathematics) ,Control and Systems Engineering ,Estimation theory ,Convergence (routing) ,Algorithm ,Mathematics ,Weighting ,Term (time) - Abstract
In this paper, weighted stochastic gradient (WSG) algorithms for ARX models are proposed by modifying the standard stochastic gradient (SG) identification algorithms. In the proposed algorithms, the correction term is a weighted combination of the correction terms of the standard SG algorithm at the current and previous recursive steps. In addition, a latest-estimation-based WSG (LE-WSG) algorithm is established, and its convergence performance is analyzed. A numerical example shows that both the WSG and LE-WSG algorithms can achieve faster convergence and higher precision than the standard SG algorithm if the weighting factor is chosen appropriately.
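The weighting idea can be sketched for a first-order ARX model. This is a hypothetical reconstruction from the abstract: the gain normalization and the specific weighting below are assumptions for illustration, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 0.5, 1.0                       # true ARX parameters: y[k] = a*y[k-1] + b*u[k-1]
u = rng.uniform(-1, 1, 3000)
y = np.zeros(3001)
for k in range(1, 3001):
    y[k] = a * y[k - 1] + b * u[k - 1]   # noise-free data for the sketch

theta = np.zeros(2)                   # estimate of (a, b)
theta0 = theta.copy()
r, g_prev, w = 1.0, np.zeros(2), 0.7  # w weights current vs. previous correction
for k in range(1, 3001):
    phi = np.array([y[k - 1], u[k - 1]])
    r += phi @ phi                    # standard SG gain normalization
    g = phi * (y[k] - phi @ theta) / r
    theta = theta + w * g + (1 - w) * g_prev   # weighted correction term (WSG idea)
    g_prev = g

err0 = np.linalg.norm(theta0 - np.array([a, b]))
err = np.linalg.norm(theta - np.array([a, b]))  # parameter error shrinks over the run
```

Setting w = 1 recovers the standard SG recursion; intermediate w blends the current and previous corrections as described in the abstract.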
- Published
- 2015
30. Tight BER bound of binned orthogonal and quasi-orthogonal dirty-paper codes
- Author
-
Xiaotian Xu and Yong Liang Guan
- Subjects
Block code ,Theoretical computer science ,Gold code ,Data_CODINGANDINFORMATIONTHEORY ,Linear code ,Dirty paper ,Computer Science::Performance ,Computer Science::Networking and Internet Architecture ,Turbo code ,Bit error rate ,Side information ,Algorithm ,Computer Science::Information Theory ,Mathematics ,Coding (social sciences) - Abstract
Existing dirty-paper coding (DPC) schemes based on binned FEC codes lack tractable BER analysis, so it is difficult to predict the performance of DPC. In this paper, we derive closed-form BER upper bounds for binned Walsh-Hadamard codes and binned Gold codes used as dirty-paper codes. The bounds are verified to be very tight when the channel contains side information only, or side information plus noise.
- Published
- 2008
31. Withdrawn Paper
- Author
-
Z. Tangxiaotao and D.-C. Gong
- Subjects
Transformation (function) ,Mathematical analysis ,Rational function ,Algorithm ,Image based ,Mathematics ,Image (mathematics) - Abstract
delete
- Published
- 2018
32. A supplement to the paper of Zayed et al. [Optik, 170 (2018) 339–341]
- Author
-
İsmail Aslan and Izmir Institute of Technology. Mathematics
- Subjects
Auxiliary equation ,Exact solution ,Khater method ,Direct method ,Substitution (logic) ,Point (geometry) ,Electrical and Electronic Engineering ,Nonlinear evolution equation ,Trial and error ,Algorithm ,Atomic and Molecular Physics, and Optics ,Electronic, Optical and Magnetic Materials ,Mathematics - Abstract
The results obtained by the so-called Khater method appear to contain computational or print errors. We look at this issue from a different point of view, namely a theoretical one, and prove our claim by a formal direct approach instead of a back-substitution (trial-and-error) approach.
- Published
- 2019
33. On the Capacity of the Carbon Copy onto Dirty Paper Channel
- Author
-
Shlomo Shamai Shitz and Stefano Rini
- Subjects
Independent and identically distributed random variables ,FOS: Computer and information sciences ,Computer Science - Information Theory ,02 engineering and technology ,Library and Information Sciences ,01 natural sciences ,Precoding ,010305 fluids & plasmas ,Channel capacity ,symbols.namesake ,0103 physical sciences ,0202 electrical engineering, electronic engineering, information engineering ,Fading ,Mathematics ,Channel use ,Computer Science::Information Theory ,business.industry ,Information Theory (cs.IT) ,020206 networking & telecommunications ,Computer Science Applications ,Additive white Gaussian noise ,Gaussian noise ,symbols ,Telecommunications ,business ,Algorithm ,Information Systems ,Communication channel - Abstract
The "Carbon Copy onto Dirty Paper" (CCDP) channel is the compound "writing on dirty paper" channel in which the channel output is obtained as the sum of the channel input, white Gaussian noise and a Gaussian state sequence randomly selected among a set possible realizations. The transmitter has non-causal knowledge of the set of possible state sequences but does not know which sequence is selected to produce the channel output. We study the capacity of the CCDP channel for two scenarios: (i) the state sequences are independent and identically distributed, and (ii) the state sequences are scaled versions of the same sequence. In the first scenario, we show that a combination of superposition coding, time-sharing and Gel'fand-Pinsker binning is sufficient to approach the capacity to within three bits per channel use for any number of possible state realizations. In the second scenario, we derive capacity to within four bits-per-channel-use for the case of two possible state sequences. This result is extended to the CCDP channel with any number of possible state sequences under certain conditions on the scaling parameters which we denote as "strong fading" regime. We conclude by providing some remarks on the capacity of the CCDP channel in which the state sequences have any jointly Gaussian distribution.
- Published
- 2017
- Full Text
- View/download PDF
34. Dirty paper coding using sign-bit shaping and LDPC codes
- Author
-
Andrew Thangaraj, Srikrishna Bhashyam, and G Shilpa
- Subjects
FOS: Computer and information sciences ,Theoretical computer science ,Information Theory (cs.IT) ,Computer Science - Information Theory ,Variable-length code ,Data_CODINGANDINFORMATIONTHEORY ,Convolutional code ,Dirty paper coding ,Forward error correction ,Low-density parity-check code ,Error detection and correction ,Algorithm ,Decoding methods ,Computer Science::Information Theory ,Mathematics ,Parity bit - Abstract
Dirty paper coding (DPC) refers to methods for pre-subtraction of known interference at the transmitter of a multiuser communication system. There are numerous applications for DPC, including coding for broadcast channels. Recently, lattice-based coding techniques have provided several designs for DPC. In lattice-based DPC, there are two codes: a convolutional code that defines a lattice used for shaping, and an error correction code used for channel coding. Several specific designs have been reported in the recent literature using convolutional and graph-based codes for capacity-approaching shaping and coding gains. In most of the reported designs, either the encoder works on a joint trellis of shaping and channel codes or the decoder requires iterations between the shaping and channel decoders, resulting in high implementation complexity. In this work, we present a lattice-based DPC scheme that provides good shaping and coding gains with moderate complexity at both the encoder and the decoder. We use a convolutional code for sign-bit shaping and a low-density parity check (LDPC) code for channel coding. The crucial idea is the introduction of a one-codeword delay and careful parsing of the bits at the transmitter, which enable an LDPC decoder to be run first at the receiver. This provides gains without the need for iterations between the shaping and channel decoders. Simulation results confirm that at high rates the proposed DPC method performs close to capacity with moderate complexity. As an application of the proposed DPC method, we show a design for superposition coding that provides rates better than time-sharing over a Gaussian broadcast channel. (5 pages; submitted to ISIT 2010.)
- Published
- 2010
35. Lattice coding for the vector fading paper problem
- Author
-
Pin-Hsun Lin, Hsuan-Jung Su, and Shih-Chun Lin
- Subjects
business.industry ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,MIMO ,Codebook ,Data_CODINGANDINFORMATIONTHEORY ,Precoding ,Channel capacity ,Channel state information ,Fading ,Dirty paper coding ,Telecommunications ,business ,Algorithm ,Computer Science::Information Theory ,Mathematics ,Communication channel - Abstract
Dirty paper coding (DPC) is a promising precoding technique for canceling arbitrary interference known only at the transmitter, thereby achieving the interference-free rate. However, this result relies on the channel coefficients being perfectly known at the transmitter when DPC is applied to a fading channel. We therefore consider the fading paper problem, where only channel statistics are available at the transmitter. In general, the optimal transmission strategies achieving the capacity of this channel are still unknown. We confine ourselves to linear-assignment Gel'fand-Pinsker coding, which has been proved to perform well, sometimes even optimally, in a variety of fast- and slow-fading channels. However, the lack of a structured codebook has so far limited the practical application of this coding. In this paper, we present a lattice-based coding structure for the vector fading paper channel. It achieves the rate performance of the linear-assignment strategies previously proved with random Gaussian codebooks. Moreover, the lattice codebook has an algebraic structure and can be implemented in practice. The results apply to emerging settings such as fading multiple-input multiple-output (MIMO) Gaussian broadcast channels and fading MIMO cognitive channels.
- Published
- 2007
36. Analysis and Design of Dirty Paper Coding by Transformation of Noise
- Author
-
Sae-Young Chung and Young-Seung Lee
- Subjects
Block code ,Concatenated error correction code ,Data_CODINGANDINFORMATIONTHEORY ,Binary erasure channel ,Linear code ,symbols.namesake ,Additive white Gaussian noise ,Electronic engineering ,symbols ,Dirty paper coding ,Low-density parity-check code ,Algorithm ,Decoding methods ,Computer Science::Information Theory ,Mathematics - Abstract
We design a coding scheme for Costa's dirty paper coding (DPC) (M.H.M. Costa, 1983) using a channel code and a shaping code. We show that by transforming the channel noise distribution, the DPC channel can be converted into the binary erasure channel (BEC) with binary interference with memory. Furthermore, the messages exchanged during the iterative decoding between the channel and shaping codes become one-dimensional under the new model. We analyze the iterative decoding and find good shaping and channel code pairs using closed-form extrinsic information transfer (EXIT) curves. We verify that our dirty paper codes designed using this method are also good for the original DPC channel with additive white Gaussian noise (AWGN) and arbitrary interference. Our implementation of DPC uses short block codes, such as repetition codes, for shaping. Although the shaping gains of such codes are not very high, they may provide a better complexity-performance trade-off for simple practical implementations of DPC. Furthermore, we show that accurate theoretical analysis is possible for such codes under our channel model.
- Published
- 2007
37. The Semi-Automatic Restoration of Regular Paper Fragments
- Author
-
Zhong Qiu Ding
- Subjects
Matrix (mathematics) ,Vector space model ,Contrast (statistics) ,General Medicine ,Construct (python library) ,Enhanced Data Rates for GSM Evolution ,Semi automatic ,MATLAB ,Blank ,computer ,Algorithm ,computer.programming_language ,Mathematics - Abstract
The restoration of regular paper fragments can be solved by edge matching, based primarily on the length and location of breaks in the writing; however, the edges of some fragments may be blank, so restoration errors are inevitable. In that case, manual intervention based on the article content is needed to eliminate the errors. Before establishing the mathematical model, the fragments are binarized with Matlab; a vector space model is then established, an edge contrast matrix is constructed, and the ordering of the fragments is obtained after Q cluster analysis. Finally, the original picture is obtained by splicing and restoring the fragments in that order.
- Published
- 2014
38. The Calibration Techniques of Paper Basis Weight Sensor
- Author
-
Wei Tang, Lian Hua Hu, and Xin Ping Li
- Subjects
Electronic engineering ,General Medicine ,Air gap (plumbing) ,Algorithm ,Mathematics - Abstract
This paper studies the factors affecting the measuring accuracy of a basis weight sensor: Z distance, XY dislocation, air-gap temperature, dust, and paper grade. Calibration techniques and methods for accurate measurement are given. Automatic calibration and range division are combined to improve the accuracy of the basis weight sensor, yielding reasonable measuring precision: static accuracy (2σ) is 0.20% and dynamic accuracy (2σ) is 0.25% over the range 10 g/m2-1500 g/m2.
- Published
- 2014
39. The Auto Restoration of Paper Fragments Rules Based on the Traveling Salesman Model
- Author
-
Ao Jun Zhou
- Subjects
Mathematical optimization ,Line (geometry) ,Genetic algorithm ,General Medicine ,2-opt ,Greedy algorithm ,Cluster analysis ,Projection (set theory) ,Bottleneck traveling salesman problem ,Travelling salesman problem ,Algorithm ,Mathematics - Abstract
The restoration of regular paper fragments can be cast as a Traveling Salesman Problem. Paper pretreatment and edge matching are the main steps of the restoration process, comprising binarization and screening of the left and right boundaries according to the width of the paper margins. For longer paper fragments, transverse splicing can be realized with a greedy algorithm, but the greedy algorithm yields non-ideal restorations for finer fragments. Therefore, noise-suppression processing before line-by-line clustering through projection is essential to ensure that regular English fragments exhibit the same line regularity as Chinese characters. Both experiments achieve a 100% correct rate when the improved genetic algorithm for the traveling salesman problem is used.
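A greedy transverse-splicing step of the kind described can be sketched on a synthetic "image" whose intensity varies smoothly across columns, so the best left-edge/right-edge match is the true neighbor. All data and the distance measure here are illustrative, not the paper's experiments.

```python
import numpy as np

def reassemble(strips, start):
    """Greedy chain: repeatedly append the strip whose left edge best matches
    the current right edge (sum of absolute pixel differences)."""
    order, remaining = [start], set(range(len(strips))) - {start}
    while remaining:
        right = strips[order[-1]][:, -1].astype(float)
        nxt = min(remaining,
                  key=lambda j: float(np.abs(strips[j][:, 0] - right).sum()))
        order.append(nxt)
        remaining.remove(nxt)
    return order

img = np.tile(np.arange(9), (6, 1))               # smooth synthetic "page", 6 x 9
pieces = [img[:, 3:6], img[:, 6:9], img[:, 0:3]]  # shuffled vertical strips
order = reassemble(pieces, start=2)               # strip 2 holds the left margin
```

On fine fragments with near-identical edges this greedy rule fails, which is why the abstract falls back to clustering and a genetic TSP solver.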
- Published
- 2014
40. Dirty Paper Coding for the MIMO cognitive radio channel with imperfect CSIT
- Author
-
Chinmay S. Vaze and Mahesh K. Varanasi
- Subjects
FOS: Computer and information sciences ,Mathematical optimization ,Iterative method ,Information Theory (cs.IT) ,Computer Science - Information Theory ,05 social sciences ,Transmitter ,MIMO ,050801 communication & media studies ,020206 networking & telecommunications ,02 engineering and technology ,0508 media and communications ,Cognitive radio ,Signal-to-noise ratio ,Transmission (telecommunications) ,0202 electrical engineering, electronic engineering, information engineering ,Dirty paper coding ,Algorithm ,Computer Science::Information Theory ,Communication channel ,Mathematics - Abstract
A Dirty Paper Coding (DPC) based transmission scheme for the Gaussian multiple-input multiple-output (MIMO) cognitive radio channel (CRC) is studied when there is imperfect and perfect channel knowledge at the transmitters (CSIT) and the receivers, respectively. In particular, the problem of optimizing the sum-rate of the MIMO CRC over the transmit covariance matrices is dealt with. Such an optimization, under the DPC-based transmission strategy, needs to be performed jointly with an optimization over the inflation factor. To this end, the problem of determining the inflation factor over the MIMO channel $Y=H_1 X + H_2 S + Z$ with imperfect CSIT is first investigated. For this problem, two iterative algorithms are developed, generalizing the corresponding algorithms proposed for the channel $Y=H(X+S)+Z$. Next, the necessary conditions for maximizing the sum-rate of the MIMO CRC over the transmit covariances for a given choice of inflation factor are derived. Using these necessary conditions and the algorithms for determining the inflation factor, an iterative numerical algorithm for the joint optimization is proposed, and some interesting observations are made from its numerical results. Furthermore, the high-SNR sum-rate scaling factor achievable over the CRC with imperfect CSIT is obtained. (To be presented at ISIT 2009, Seoul, S. Korea.)
- Published
- 2009
41. Dirty Paper Coding with a Finite Input Alphabet
- Author
-
Tal Gariby, Uri Erez, and Shlomo Shamai
- Subjects
MIMO ,Data_CODINGANDINFORMATIONTHEORY ,symbols.namesake ,Intersymbol interference ,Gaussian noise ,symbols ,Electronic engineering ,Dirty paper coding ,Random variable ,Algorithm ,Digital watermarking ,Computer Science::Information Theory ,Constellation ,Coding (social sciences) ,Mathematics - Abstract
We study a dirty paper channel model where the input is constrained to belong to a PAM constellation. In particular, we provide lower bounds on the capacity as well as explicit coding schemes for the binary-input dirty-paper channel. We examine the case of causal as well as non-causal side information.
- Published
- 2006
42. Peak To Average Power Ratio Reduction for Multicarrier Systems Using Dirty Paper Coding
- Author
-
Pin-Hsun Lin, Hsuan-Jung Su, Hsuan-Tien Liu, and Shih-Chun Lin
- Subjects
Orthogonal frequency-division multiplexing ,business.industry ,Real-time computing ,Power (physics) ,Constraint (information theory) ,Reduction (complexity) ,Signal-to-noise ratio ,Transmission (telecommunications) ,Encoding (memory) ,Power ratio ,Bit error rate ,Dirty paper coding ,Electrical and Electronic Engineering ,Telecommunications ,business ,Algorithm ,Mathematics - Abstract
In this paper, we improve the peak-to-average power ratio (PAPR) reduction scheme for multicarrier systems proposed by Collings and Clarkson by applying dirty paper coding with a peak power constraint. We compare the bit error rate (BER) performance of conventional orthogonal frequency division multiplexing (OFDM), Collings and Clarkson's method, and the proposed scheme with bit loading. Simulations show that when channel coding is considered, Collings and Clarkson's method performs worst, independent of the number of bits loaded. The proposed method performs best when the number of bits is large and is therefore suitable for high-speed transmission.
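The PAPR metric being reduced is straightforward to measure on an OFDM time-domain block; the sketch below shows only the metric (the DPC-based reduction itself is not reproduced here, and the QPSK block is illustrative).

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a discrete-time signal block, in dB."""
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(1)
# Random QPSK subcarriers; the IFFT produces the OFDM time-domain samples.
qpsk = (rng.choice([-1, 1], 64) + 1j * rng.choice([-1, 1], 64)) / np.sqrt(2)
ofdm_block = np.fft.ifft(qpsk)
```

A constant-envelope signal has 0 dB PAPR, while the superposition of many subcarriers produces the large peaks that PAPR-reduction schemes target.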
- Published
- 2006
43. Structured Dirty Paper Coding with Known Interference Structure at Receiver
- Author
-
Sumit Roy, Hui Liu, and Bin Liu
- Subjects
Modulation ,Transmitter ,Electronic engineering ,Dirty paper coding ,Binary code ,Interference (wave propagation) ,Algorithm ,Precoding ,Electromagnetic interference ,Phase-shift keying ,Mathematics - Abstract
Tomlinson-Harashima precoding (THP) is well known as an implementation of dirty paper coding. Despite its simplicity, THP suffers a significant performance loss in the low-SNR region due to modulo operations. In this paper, we propose a new dirty paper precoding scheme that takes advantage of the known modulation structure of the interference (e.g., BPSK and QPSK signals). The new method, termed structured DPC (SDPC), outperforms regular THP with modest changes to the transmitter and receiver. For the BPSK and QPSK cases investigated, SDPC suffers a power loss of at most 1.25 dB compared with the interference-free case, whereas regular THP-based scalar dirty paper coding has a typical 4-5 dB capacity loss in the same low-SNR regions.
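The modulo operation at the heart of THP can be sketched in scalar form: transmitting x = (d − s) mod M keeps the transmit amplitude bounded regardless of the interference s, and the receiver applies the same modulo. The parameters below are illustrative.

```python
def mod_centered(v, M):
    """Reduce v into the centered interval [-M/2, M/2)."""
    return (v + M / 2.0) % M - M / 2.0

def thp_precode(d, s, M):
    return mod_centered(d - s, M)   # pre-subtract the known interference

def thp_receive(y, M):
    return mod_centered(y, M)       # same modulo recovers d (noise-free case)

x = thp_precode(1.0, 10.7, M=8.0)   # |x| stays <= M/2 despite the large s
y = x + 10.7                        # channel adds the interference back
d_hat = thp_receive(y, M=8.0)       # recovered data symbol
```

The modulo folding is exactly what costs THP performance at low SNR, since noise near the interval boundary wraps to the opposite side; SDPC's exploitation of the interference constellation is aimed at avoiding that loss.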
- Published
- 2006
44. Image data hiding based on capacity-approaching dirty-paper coding
- Author
-
Yang Yang, Yong Sun, Vladimir Stankovic, and Zixiang Xiong
- Subjects
Theoretical computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Image processing ,Lossy compression ,symbols.namesake ,Gaussian noise ,Information hiding ,Discrete cosine transform ,Image scaling ,symbols ,Dirty paper coding ,Digital watermarking ,Algorithm ,Mathematics - Abstract
We present an image data-hiding scheme based on near-capacity dirty-paper codes. The scheme achieves high embedding rates by hiding information in the mid-frequency DCT coefficients of each DCT block of the host image. To reduce the perceptual distortion caused by data-hiding, the mid-frequency DCT coefficients are first perceptually scaled according to Watson's model. Then a rate-1/3 projection matrix is applied in conjunction with a rate-1/5 capacity-approaching dirty-paper code. We are able to embed 1500 information bits into 256×256 images, outperforming the best known data-hiding scheme by 33% under a Gaussian noise attack. Robustness tests against different attacks, such as low-pass filtering, image scaling, and lossy compression, show that our scheme is a good candidate for high-rate image data-hiding applications.
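A sketch of how mid-frequency coefficients of an 8×8 DCT block might be selected for embedding; the band limits and the ±0.5 nudge are illustrative stand-ins for the perceptual scaling and dirty-paper code described above:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis: C @ x computes the 1-D DCT of x
    k, m = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)
    return c

C = dct_matrix(8)
block = np.arange(64, dtype=float).reshape(8, 8)  # stand-in host block
coeffs = C @ block @ C.T                          # 2-D DCT

# Mid-frequency band: diagonal index sums between 3 and 6 (illustrative);
# low frequencies carry image structure, high ones die under compression.
r, s = np.indices(coeffs.shape)
mid_band = (r + s >= 3) & (r + s <= 6)
coeffs[mid_band] += 0.5       # stand-in for the coded embedding
stego = C.T @ coeffs @ C      # inverse 2-D DCT
```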
- Published
- 2006
45. Soft measurement of paper smoothness based on time–frequency analysis of paper quantization noise
- Author
-
Y.L. Tan, P. Ren, and Q. Zhou
- Subjects
Soft computing ,Radial basis function neural networks ,Applied Mathematics ,System of measurement ,Quantization (signal processing) ,Electronic engineering ,Electrical and Electronic Engineering ,Condensed Matter Physics ,Instrumentation ,Algorithm ,Mathematics ,Time–frequency analysis - Abstract
Because paper smoothness cannot be measured online, a new soft-measurement method is proposed based on a study of the distribution of paper quantization noise. A soft-measurement system is established in which the quantization noise is processed by time–frequency analysis and a non-linear transformation, so that online paper smoothness can be estimated by a radial basis function neural network (RBFNN). The results show that this soft-computing method is feasible and achieves high measurement accuracy.
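A minimal Gaussian RBF network of the kind the abstract mentions, with synthetic features standing in for the time–frequency statistics of the quantization noise (all data, dimensions, and the shared width are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))            # stand-in noise features
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]      # stand-in smoothness target

# Fixed Gaussian RBF layer: centers picked from the data, shared width
centers = X[:20]
width = 1.0

def rbf_features(X):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

# Output weights by least squares (the linear read-out layer)
Phi = rbf_features(X)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = rbf_features(X) @ w
```

Fixing the centers and width and solving only for the output weights keeps training to a single least-squares solve, a common shortcut for small RBF networks.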
- Published
- 2012
46. On robust dirty paper coding
- Author
-
Uri Erez and Anatoly Khina
- Subjects
business.industry ,Transmitter ,symbols.namesake ,Intersymbol interference ,Single antenna interference cancellation ,Gaussian noise ,Robustness (computer science) ,symbols ,Dirty paper coding ,Telecommunications ,business ,Algorithm ,Decoding methods ,Computer Science::Information Theory ,Coding (social sciences) ,Mathematics - Abstract
A dirty paper channel is considered in which the transmitter knows the interference sequence up to a constant multiplicative factor known only to the receiver. We derive lower bounds on the achievable communication rate by proposing a coding scheme that partially compensates for the imprecise channel knowledge. We focus on a communication scenario in which the Gaussian noise is weak while the interference is strong. Our approach is based on analyzing the performance achievable with extended Tomlinson-Harashima-like coding schemes. When the power of the interference is finite, we show that this may be achieved by a judicious choice of the scaling parameter at the receiver. We further show that the communication rate may be improved, for finite as well as infinite interference power, by allowing randomized scaling at the transmitter.
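For the scalar case, the classical MMSE choice α = P/(P+N) (Costa's scaling) illustrates the kind of receiver-side scaling parameter being tuned; this is the textbook baseline, not the robust scheme of the paper:

```python
def costa_alpha(signal_power, noise_power):
    # MMSE scaling factor used in scalar dirty-paper schemes:
    # alpha = P / (P + N); alpha -> 1 as the noise vanishes.
    return signal_power / (signal_power + noise_power)
```

At low noise (the regime the abstract focuses on) α approaches 1, so small errors in the assumed channel gain translate almost directly into residual interference, which is why the choice of scaling matters.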
- Published
- 2008
47. Discussion of the Paper 'Asymptotic Theory of Outlier Detection Algorithms for Linear Time Series Regression Models' by Johansen & Nielsen
- Author
-
Elvezio Ronchetti
- Subjects
Statistics and Probability ,Asymptotic analysis ,Series (mathematics) ,ddc:310 ,Regression analysis ,Anomaly detection ,Statistics, Probability and Uncertainty ,Time complexity ,Algorithm ,Mathematics - Abstract
Discussion of the paper by Johansen & Nielsen
- Published
- 2016
48. Discussion of the Paper 'Asymptotic Theory of Outlier Detection Algorithms for Linear Time Series Regression Models' by Søren Johansen and Bent Nielsen
- Author
-
Silvelyn Zwanzig
- Subjects
Statistics and Probability ,Asymptotic analysis ,Series (mathematics) ,05 social sciences ,Bent molecular geometry ,Regression analysis ,01 natural sciences ,010104 statistics & probability ,0502 economics and business ,Anomaly detection ,0101 mathematics ,Statistics, Probability and Uncertainty ,Time complexity ,Algorithm ,050205 econometrics ,Mathematics - Abstract
Discussion of the Paper "Asymptotic Theory of Outlier Detection Algorithms for Linear Time Series Regression Models" by Søren Johansen and Bent Nielsen
- Published
- 2016
49. A technical note on the paper 'hGA: Hybrid genetic algorithm in fuzzy rule-based classification systems for high-dimensional problems'
- Author
-
Shahab Derhami and Alice E. Smith
- Subjects
Mathematical optimization ,021103 operations research ,Fuzzy classification ,Fuzzy rule ,0211 other engineering and technologies ,02 engineering and technology ,Fuzzy logic ,Set (abstract data type) ,Genetic algorithm ,Genetic fuzzy systems ,0202 electrical engineering, electronic engineering, information engineering ,020201 artificial intelligence & image processing ,Integer programming ,Algorithm ,Software ,Statistical hypothesis testing ,Mathematics - Abstract
This paper provides a corrected formulation of the mixed integer programming model proposed by Aydogan et al. (2012). They proposed a genetic algorithm to learn fuzzy rules for a fuzzy rule-based classification system and developed a mixed integer programming (MIP) model to prune the generated rules by selecting the best set of rules to maximize predictive accuracy. However, their MIP formulation contains errors, which are described in this technical note. We develop corrections and improvements to the original formulation and test it with non-parametric statistical tests on the same data sets used to evaluate the original model. The statistical analysis shows that the results of the corrected formulation differ significantly from those of the original model.
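The rule-selection objective that such a MIP encodes can be illustrated by a tiny brute-force search over rule subsets; the rules and samples below are synthetic, and a real instance would of course use the solver-based formulation rather than enumeration:

```python
from itertools import combinations

# Synthetic instance: each rule is (matches(x) predicate, predicted label)
samples = [(0, 'a'), (1, 'a'), (2, 'b'), (3, 'b')]
rules = [
    (lambda x: x < 2, 'a'),
    (lambda x: x >= 2, 'b'),
    (lambda x: x % 2 == 0, 'b'),   # a noisy rule
]

def accuracy(rule_subset):
    correct = 0
    for x, label in samples:
        # First matching rule in the subset classifies the sample
        for match, pred in rule_subset:
            if match(x):
                correct += pred == label
                break
    return correct / len(samples)

# Objective of the selection problem: the subset with maximal accuracy
best = max((subset for r in range(1, len(rules) + 1)
            for subset in combinations(rules, r)),
           key=accuracy)
```

Enumeration is exponential in the number of rules, which is exactly why the pruning step is cast as a MIP in the first place.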
- Published
- 2016
50. Superposition coding for Gaussian dirty paper
- Author
-
Shlomo Shamai, David Burshtein, Amir Bennatan, and Giuseppe Caire
- Subjects
Theoretical computer science ,Channel (digital image) ,Generalization ,Gaussian ,List decoding ,Data_CODINGANDINFORMATIONTHEORY ,Dirty paper ,symbols.namesake ,Gaussian channels ,symbols ,Superposition coding ,Algorithm ,Decoding methods ,Computer Science::Information Theory ,Mathematics - Abstract
We present practical codes designed for the Gaussian dirty paper channel. We show that the dirty paper decoding problem can be transformed into an equivalent multiple-access problem, for which we apply superposition coding. Our approach is a generalization of the nested lattices approach of Zamir et al. (2002). We present simulation results which confirm the effectiveness of our methods.
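A noiseless two-layer sketch of the superposition idea behind the multiple-access view: the receiver decodes the strong layer first, cancels it, then decodes the weak layer (the layer powers and BPSK mapping are illustrative, not the lattice construction of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
b1 = rng.integers(0, 2, 100)          # strong-layer bits
b2 = rng.integers(0, 2, 100)          # weak-layer bits

# Superposition: sum of a high-power and a low-power BPSK layer
x = 2.0 * (2 * b1 - 1) + 0.5 * (2 * b2 - 1)
y = x                                  # noiseless channel for the sketch

s1 = (y > 0).astype(int)               # decode the strong layer first
y2 = y - 2.0 * (2 * s1 - 1)            # cancel its contribution
s2 = (y2 > 0).astype(int)              # then decode the weak layer
```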
- Published
- 2004