Search Results
12,062 results
2. Torn-Paper Coding.
- Author
- Shomorony, Ilan and Vahid, Alireza
- Subjects
- SEQUENTIAL analysis, DATA warehousing
- Abstract
We consider the problem of communicating over a channel that randomly “tears” the message block into small pieces of different sizes and shuffles them. For the binary torn-paper channel with block length $n$ and pieces of length $\mathrm{Geometric}(p_n)$, we characterize the capacity as $C = e^{-\alpha}$, where $\alpha = \lim_{n\to\infty} p_n \log n$. Our results show that the case of $\mathrm{Geometric}(p_n)$-length fragments and the case of deterministic length-$(1/p_n)$ fragments are qualitatively different and, surprisingly, the capacity of the former is larger. Intuitively, this is due to the fact that, in the random fragments case, large fragments are sometimes observed, which boosts the capacity. [ABSTRACT FROM AUTHOR]
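A quick numerical reading of the capacity expression (a sketch in Python; the $(1-\alpha)^+$ baseline for deterministic fragments is an assumption drawn from related shuffling-channel results, not stated in this abstract):

```python
import math

def torn_paper_capacity(alpha: float) -> float:
    """Capacity C = e^{-alpha} of the binary torn-paper channel with
    Geometric(p_n) fragment lengths, where alpha = lim p_n * log n."""
    return math.exp(-alpha)

def deterministic_fragment_capacity(alpha: float) -> float:
    """Assumed baseline (1 - alpha)^+ for deterministic length-(1/p_n)
    fragments, taken from related shuffling-channel results."""
    return max(1.0 - alpha, 0.0)

for alpha in (0.1, 0.5, 1.0, 2.0):
    print(f"alpha={alpha:.1f}  geometric={torn_paper_capacity(alpha):.3f}  "
          f"deterministic={deterministic_fragment_capacity(alpha):.3f}")
```

Since $e^{-\alpha} > (1-\alpha)^+$ for every $\alpha > 0$, the printout reflects the abstract's observation that random fragmentation strictly helps.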
- Published
- 2021
- Full Text
- View/download PDF
3. Paper Bodies: Data and Embodiment in the Sisterhood of Slade's Commonplace Books.
- Author
- Hess, Jillian M.
- Subjects
- COMMONPLACE-books, ROMANTICISM, ENCODING, PARATEXT, ARCHIVES
- Abstract
The article introduces a small data set for Romanticists' consideration: a collection of seventeen commonplace books kept from 1814 to 1817. It explores how Mary and Sarah Leigh and their cousin Maria Leigh used their commonplace books as archives of shared intimacy. The strategies the Leigh sisters used to encode embodied data include linking immaterial ideas with the materiality of the notebook, and paratext that teaches readers how to read the verse in the context of the sisters' lived experience.
- Published
- 2022
- Full Text
- View/download PDF
4. A commentary on the NIMA paper by J. Brennan et al. on the demonstration of two-dimensional time encoded imaging of fast neutrons.
- Author
- Wehe, David
- Subjects
- FAST neutrons, ENCODING, ARMS control
- Published
- 2024
- Full Text
- View/download PDF
5. On the Capacity of the Carbon Copy onto Dirty Paper Channel.
- Author
- Rini, Stefano and Shamai Shitz, Shlomo
- Subjects
- RADIO transmitter fading, TRANSMITTERS (Communication), QUASISTATIC processes, RANDOM noise theory, ENCODING
- Abstract
The “carbon copy onto dirty paper” (CCDP) channel is the compound “writing on dirty paper” channel in which the channel output is obtained as the sum of the channel input, white Gaussian noise, and a Gaussian state sequence randomly selected from a set of possible realizations. The transmitter has non-causal knowledge of the set of possible state sequences but does not know which sequence is selected to produce the channel output. We study the capacity of the CCDP channel for two scenarios: 1) the state sequences are independent and identically distributed; and 2) the state sequences are scaled versions of the same sequence. In the first scenario, we show that a combination of superposition coding, time-sharing, and Gel’fand–Pinsker binning is sufficient to approach the capacity to within 3 bits per channel use for any number of possible state realizations. In the second scenario, we derive capacity to within 4 bits per channel use for the case of two possible state sequences. This result is extended to the CCDP channel with any number of possible state sequences under certain conditions on the scaling parameters, a regime we denote as “strong fading”. We conclude with some remarks on the capacity of the CCDP channel in which the state sequences have an arbitrary jointly Gaussian distribution. [ABSTRACT FROM AUTHOR]
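For readers new to the underlying “writing on dirty paper” idea, here is a minimal one-dimensional modulo-precoding sketch in the Erez–Shamai–Zamir style, with the usual MMSE scaling factor $\beta = P/(P+N)$. This is a generic illustration, not the authors' CCDP construction:

```python
import numpy as np

rng = np.random.default_rng(0)
DELTA = 4.0                       # fundamental interval of the 1-D lattice
P = DELTA**2 / 12                 # transmit power (uniform over the interval)
N = 0.1 * P                       # noise power
alpha = P / (P + N)               # MMSE inflation factor from Costa's scheme

def fold(x):
    """Symmetric modulo onto [-DELTA/2, DELTA/2)."""
    return (x + DELTA / 2) % DELTA - DELTA / 2

n = 100_000
v = rng.uniform(-DELTA / 2, DELTA / 2, n)    # data (coset representatives)
s = rng.normal(0, 5.0, n)                    # interference, known at transmitter
u = rng.uniform(-DELTA / 2, DELTA / 2, n)    # common random dither
x = fold(v - alpha * s - u)                  # transmitted signal, power ~ P
y = x + s + rng.normal(0, np.sqrt(N), n)     # channel adds state and noise
r = fold(alpha * y + u)                      # receiver folds back: r = fold(v - (1-alpha)x + alpha*z)

err = fold(r - v)                            # residual noise, interference-free
print(f"residual power {np.mean(err**2):.4f} vs interference power {np.var(s):.1f}")
```

The residual power is roughly $(1-\beta)^2 P + \beta^2 N$, independent of the interference power, which is the essence of dirty paper coding.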
- Published
- 2017
- Full Text
- View/download PDF
6. Proper multi-layer coding in fading dirty-paper channel.
- Author
- Hoseini, Sayed Ali Khodam and Akhlaghi, Soroush
- Subjects
- CHANNEL coding, RADIO transmitter fading, ADDITIVE white Gaussian noise channels, RADIO transmitters & transmission, ENCODING
- Abstract
This study investigates multi-layer coding over a dirty-paper channel. First, it is demonstrated that superposition coding in such a channel still achieves the capacity of the interference-free additive white Gaussian noise channel when the transmitter is non-causally aware of the interference signal. The problem is then extended to the dirty-paper block-fading channel, where it is shown that, in the absence of channel information at the transmitter, the so-called broadcast approach maximises the average achievable rate of the channel. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
7. Ergodic Fading MIMO Dirty Paper and Broadcast Channels: Capacity Bounds and Lattice Strategies.
- Author
- Hindy, Ahmed and Nosratinia, Aria
- Abstract
A multiple-input multiple-output (MIMO) version of the dirty paper channel is studied, where the channel input and the dirt experience the same fading process, and the fading channel state is known at the receiver. This represents settings where signal and interference sources are co-located, such as in the broadcast channel. First, a variant of Costa’s dirty paper coding is presented, whose achievable rates are within a constant gap to capacity for all signal and dirt powers. In addition, a lattice coding and decoding scheme is proposed, whose decision regions are independent of the channel realizations. Under Rayleigh fading, the gap to capacity of the lattice coding scheme vanishes with the number of receive antennas, even at finite Signal-to-Noise Ratio (SNR). Thus, although the capacity of the fading dirty paper channel remains unknown, this paper shows it is not far from its dirt-free counterpart. The insights from the dirty paper channel directly lead to transmission strategies for the two-user MIMO broadcast channel, where the transmitter emits a superposition of desired and undesired (dirt) signals with respect to each receiver. The performance of the lattice coding scheme is analyzed under different fading dynamics for the two users, showing that high-dimensional lattices achieve rates close to capacity. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
8. Dirty-Paper Coding Based Secure Transmission for Multiuser Downlink in Cellular Communication Systems.
- Author
- Wang, Bo and Mu, Pengcheng
- Subjects
- MULTIUSER channels, LINEAR network coding, WIRELESS communications, BROADCAST channels, COVARIANCE matrices, PROBABILITY theory
- Abstract
This paper studies secure transmission in a multiuser broadcast channel where only the statistical channel state information of the eavesdropper is available. We propose to apply secret dirty-paper coding (S-DPC) in this scenario to support the secure transmission of one user and the normal (unclassified) transmission of the other users. By adopting S-DPC and encoding the secret message first, all the information-bearing signals of the normal transmission are treated as noise by potential eavesdroppers and thus provide secrecy for the secure transmission. In this way, the proposed approach exploits the intrinsic secrecy of multiuser broadcasting and can serve as an energy-efficient alternative to the traditional artificial noise (AN) scheme. To evaluate the secrecy performance of this approach and compare it with the AN scheme, we propose two S-DPC-based secure transmission schemes for maximizing the secrecy rate under constraints on the secrecy outage probability (SOP) and the normal transmission rates. The first scheme directly optimizes the covariance matrices of the transmit signals, and a novel approximation of the intractable SOP constraint is derived to facilitate the optimization. The second scheme combines zero-forcing dirty-paper coding and AN, and its optimization involves only power allocation. We establish efficient numerical algorithms to solve the optimization problems for both schemes. Theoretical and simulation results confirm that, in addition to supporting the normal transmission, the achievable secrecy rates of the proposed schemes can be close to that of the traditional AN scheme, which supports only the secure transmission of one user. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
9. Dirty Paper Coding Based on Polar Codes and Probabilistic Shaping.
- Author
- Sener, M. Yusuf, Bohnke, Ronald, Xu, Wen, and Kramer, Gerhard
- Abstract
A precoding technique based on polar codes and probabilistic shaping is introduced for dirty paper coding. Two variants of the precoding use multi-level shaping and sign-bit shaping in one dimension. The decoder uses multi-stage successive-cancellation list decoding with list-passing across the bit levels. The approach achieves approximately the same frame error rates as polar codes with multi-level shaping over standard additive white Gaussian noise channels at a block length of 256 symbols and with different amplitude shift keying (ASK) constellations. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
10. Secret Writing on Dirty Paper: A Deterministic View.
- Author
- El-Halabi, Mustafa, Liu, Tie, Georghiades, Costas N., and Shamai, Shlomo
- Subjects
- CRYPTOGRAPHY, INTERFERENCE channels (Telecommunications), CODING theory, INFORMATION theory, COMPUTER network security, MATHEMATICAL models, VECTOR analysis, GAUSSIAN processes
- Abstract
Recently, there has been considerable success in using the deterministic approach to provide approximate characterizations of Gaussian network capacity. In this paper, we take a deterministic view and revisit the problem of the wiretap channel with side information. A precise characterization of the secrecy capacity is obtained for a linear deterministic model, which naturally suggests a coding scheme that we show achieves the secrecy capacity of the degraded Gaussian model (dubbed “secret writing on dirty paper”) to within half a bit. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
11. A Review of Affective Computing Research Based on Function-Component-Representation Framework.
- Author
- Ma, Haiwei and Yarosh, Svetlana
- Abstract
Affective computing (AC), a field that bridges the gap between human affect and computational technology, has witnessed remarkable technical advancement. However, theoretical underpinnings of affective computing are rarely discussed and reviewed. This paper provides a thorough conceptual analysis of the literature to understand theoretical questions essential to affective computing and current answers. Inspired by emotion theories, we proposed the function-component-representation (FCR) framework to organize different conceptions of affect along three dimensions that each address an important question: function of affect (why compute affect), component of affect (how to compute affect), and representation of affect (what affect to compute). We coded each paper by its underlying conception of affect and found preferences towards affect detection, behavioral component, and categorical representation. We also observed coupling of certain conceptions. For example, papers using the behavioral component tend to adopt the categorical representation, whereas papers using the physiological component tend to adopt the dimensional representation. The FCR framework is not only the first attempt to organize different theoretical perspectives in a systematic and quantitative way, but also a blueprint to help conceptualize an AC project and pinpoint new possibilities. Future work may explore how the identified frequencies of FCR framework combinations may be applied in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
12. On the Dispersions of the Gel’fand–Pinsker Channel and Dirty Paper Coding.
- Author
- Scarlett, Jonathan
- Subjects
- ERROR probability, GAUSSIAN channels, RANDOM noise theory, DISPERSIVE channels (Telecommunication), CHANNEL coding
- Abstract
This paper studies the second-order coding rates for memoryless channels with a state sequence known non-causally at the encoder. In the case of finite alphabets, an achievability result is obtained using constant-composition random coding, and by using a small fraction of the block to transmit the empirical distribution of the state sequence. For error probabilities less than 0.5, it is shown that the second-order rate improves on an existing one based on independent and identically distributed random coding. In the Gaussian case (dirty paper coding) with an almost-sure power constraint, an achievability result is obtained using random coding over the surface of a sphere, and using a small fraction of the block to transmit a quantized description of the state power. It is shown that the second-order asymptotics are identical to the single-user Gaussian channel of the same input power without a state. [ABSTRACT FROM AUTHOR]
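The closing claim can be made concrete with the standard normal approximation from the finite-blocklength literature. The AWGN capacity and dispersion expressions below are the usual ones (Polyanskiy et al.), used here on the assumption that, per the abstract, the dirty-paper second-order term matches the state-free Gaussian channel:

```python
import math
from statistics import NormalDist

def awgn_normal_approx(n: int, snr: float, eps: float) -> float:
    """Approximate max rate in bits/use: C - sqrt(V/n) * Q^{-1}(eps)."""
    C = 0.5 * math.log2(1 + snr)                               # AWGN capacity
    V = snr * (snr + 2) / (2 * (snr + 1) ** 2) * math.log2(math.e) ** 2
    q_inv = NormalDist().inv_cdf(1 - eps)                      # Q^{-1}(eps)
    return C - math.sqrt(V / n) * q_inv

for n in (100, 1_000, 10_000):
    print(n, round(awgn_normal_approx(n, snr=1.0, eps=1e-3), 4))
```

The back-off from capacity shrinks as $1/\sqrt{n}$, which is what "identical second-order asymptotics" buys in practice.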
- Published
- 2015
- Full Text
- View/download PDF
13. Design of Chipless RFID Tags Printed on Paper by Flexography.
- Author
- Vena, Arnaud, Perret, Etienne, Tedjini, Smail, Eymin Petot Tourtollet, Guy, Delattre, Anastasia, Garet, Frederic, and Boutant, Yann
- Subjects
- RADIO frequency identification systems, FLEXOGRAPHY, ULTRA-wideband antennas, PRINTING, COMPUTER printers
- Abstract
In this paper, we demonstrate for the first time that a 19-bit chipless tag based on a paper substrate can be realized using the flexography technique, which is an industrial high-speed printing process. The chipless tag is able to operate within the ultra-wide band (UWB) and has a reasonable size ($7 \times 3\ \mathrm{cm}^2$) compared to state-of-the-art versions. Thus, it is possible to use this design for various identification applications that require a low unit cost of tags. Both the simulation and measurement results are shown, and performance comparisons are provided between several realization processes, such as classical chemical etching, flexography printing, and catalyst inkjet printing. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
14. Channel Capacity Analysis for Dirty Paper Coding With the Binary Codeword and Interference.
- Author
- Xu, Zhengguang and Xie, Yongbiao
- Abstract
Dirty paper coding is an interference pre-cancellation method for interference known at the transmitter and serves as a basic building block in digital watermarking systems. In this letter, we investigate the dirty paper model in the simplest digital communication setting, where both the codeword and the interference are binary. For watermark embedding, we derive the relevant coding, the constant coding, and the symmetric relevant coding when the encoder operates on the binary codeword and interference. The channel capacity is analyzed and the optimal parameter is discussed for this case. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
15. Dirty-Paper Coding for the Gaussian Multiaccess Channel With Conferencing.
- Author
- Bross, Shraga I., Lapidoth, Amos, and Wigger, Michèle
- Subjects
- GAUSSIAN processes, TRANSMITTERS (Communication), RANDOM noise theory, CODE division multiple access, CODING theory, SIGNAL processing, BIT error rate
- Abstract
We derive the capacity region of the two-user dirty-paper Gaussian multiaccess channel (MAC) with conferencing encoders. In this MAC, prior to each transmission block, the transmitters can hold a conference in which they can communicate with each other over error-free bit pipes of given capacities. The received signal suffers not only from additive Gaussian noise but also from additive interference, which is known noncausally to the transmitters but not to the receiver. The additive interference is modeled as Gaussian or uniform over a sphere. We show that the interference can be perfectly mitigated, i.e., that the capacity region without interference can also be achieved in its presence. This holds irrespective of whether the transmitters learn the interference before or after the conference. It follows as a corollary that also for the MAC with degraded message sets, the interference can be perfectly mitigated if it is known noncausally to the transmitters. To derive our results, we generalize Costa's single-user writing-on-dirty-paper achievability result to channels with dependent interference and not-necessarily Gaussian noise. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
16. Writing on Fading Paper, Dirty Tape With Little Ink: Wideband Limits for Causal Transmitter CSI.
- Author
- Borade, Shashi and Zheng, Lizhong
- Subjects
- TRANSMITTERS (Communication), BROADBAND communication systems, INTERFERENCE channels (Telecommunications), INDUSTRIAL costs, BANDWIDTHS
- Abstract
A wideband Rayleigh fading channel is considered with causal channel state information (CSI) at the transmitter and no receiver CSI. A simple orthogonal code with energy detection rule at the receiver (similar to pulse position modulation in IEEE Trans. Inf. Theory, vol. 46, no. 4, Apr. 2000 and IEEE Trans. Inf. Theory, vol. 52 no. 5, May 2006) is shown to achieve the capacity of this channel in the wideband limit. This strategy transmits energy only when the channel gain exceeds a threshold, hence only needs causal transmitter CSI. In the wideband limit, this capacity without any receiver CSI is the same as the capacity with full receiver CSI, which is proportional to the logarithm of the bandwidth. Similar threshold-based pulse position modulation is shown to achieve the capacity per unit cost of the dirty-tape channel (dirty paper channel with causal transmitter CSI and no receiver CSI), which equals its capacity per unit cost with full receiver CSI. Then, a general discrete channel with i.i.d. states is considered. Each input has an associated cost and a zero cost input “0” exists. The channel state is assumed to be known at the transmitter in a causal manner. Capacity per unit cost is found for this channel and a simple orthogonal code is shown to achieve this capacity. Later, a novel orthogonal coding scheme is proposed for the case of causal transmitter CSI and a condition for equivalence of capacity per unit cost for causal and noncausal transmitter CSI is derived. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
17. The Distortions Region of Broadcasting Correlated Gaussians and Asymmetric Data Transmission Over a Gaussian BC.
- Author
- Bross, Shraga I.
- Subjects
- DATA transmission systems, DIGITAL communications, GAUSSIAN channels, BROADCAST channels, ELECTRONIC paper, VIDEO coding, DIGITAL video broadcasting
- Abstract
A memoryless bivariate Gaussian source is transmitted to a pair of receivers over an average-power limited, bandwidth-matched Gaussian broadcast channel. Based on their observations, Receiver 1 reconstructs the first source component while Receiver 2 reconstructs the second source component, both seeking to minimize the expected squared-error distortion. In addition to the source transmission, digital information at a specified rate should be conveyed reliably to Receiver 1, the “stronger” receiver. Given the message rate, we characterize the achievable distortions region. Specifically, there is an $\mathsf{SNR}$-threshold below which Dirty Paper coding of the digital information against a linear combination of the source components is optimal. The threshold is a function of the digital information rate, the source correlation, and the distortion at the “stronger” receiver. Above this threshold, a Dirty Paper coding extension of the Tian–Diggavi–Shamai hybrid scheme is shown to be optimal. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
18. Light drive reversible color switching for rewritable media and encoding.
- Author
- Ren, Qiaoli, Aodeng, Gerile, Ga, Lu, and Ai, Jun
- Subjects
- ELECTRONIC paper, REDUCING agents, CATALYSTS, ENCODING, INDUSTRIAL costs, COLOR
- Abstract
• Photoreversible color switching systems are integrated from the reducing agent triethanolamine and a catalyst of β-FeOOH nanorods.
• With a high switching rate and high reversibility (>10 cycles), the new system could see broad use in rewritable paper.
• The rewritable paper is highly applicable as a self-erasing rewritable medium for printing.
• The rewritable paper can be applied in a data encoding and reading strategy.
Nowadays, photoreversible color switching systems (PCSS) are often limited by requirements such as good stability, low toxicity, fast light response, long cycling performance, and low production cost; it is therefore a considerable challenge to develop a system that integrates all of these beneficial features. Herein, a new type of PCSS is demonstrated, which integrates the reducing agent triethanolamine (TEOA), a catalyst of β-FeOOH nanorods, and the redox-driven color conversion characteristics of redox dyes. The system has the advantages of a high switching rate, high reversibility (>10 cycles), wavelength-selective response, safety, and low light damage, and can be widely used in rewritable paper. The as-prepared rewritable paper has high contrast, high resolution, suitable printing time, and good reversibility, in line with the environmental-protection concept of green printing. Rewritable paper is a kind of self-erasable rewritable medium that is highly suitable for printing. The environmentally friendly film has the advantages of low cost, convenient preparation, and recyclability. It is expected to replace traditional writing and printing paper and existing systems, taking a big step toward practical application. Even more surprisingly, it can also be applied in a data encoding and reading strategy. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
19. Near-Field Chipless-RFID System With Erasable/Programmable 40-bit Tags Inkjet Printed on Paper Substrates.
- Author
- Herrojo, Cristian, Mata-Contreras, Javier, Paredes, Ferran, Nunez, Alba, Ramon, Eloi, and Martin, Ferran
- Abstract
In this letter, a chipless radio frequency identification (chipless-RFID) system with erasable/programmable 40-bit tags inkjet printed on paper substrates, where tag reading proceeds sequentially through near-field coupling, is presented for the first time. The tags consist of a linear chain of identical split ring resonators (SRRs) printed at predefined and equidistant positions on a paper substrate, and each resonant element provides a bit of information. Tag programming is achieved by cutting certain resonant elements, providing the logic state “0” to the corresponding bit. Conversely, tags can be erased (all bits set to “1”) by short circuiting those previously cut resonant elements through inkjet. An important feature of the proposed system is the fact that tag reading is possible either with the SRR chain faced up or faced down (with regard to the reader). To this end, two pairs of header bits (resonators), with different sequences, have been added at the beginning and at the end of the tag identification chain. Moreover, tag data storage capacity (number of bits) is only limited by the space occupied by the linear chain. The implementation of tags on paper substrates demonstrates the potential of the proposed chipless-RFID system in secure paper applications, where the necessary proximity between the reader and the tag, inherent to near-field reading, is not an issue. [ABSTRACT FROM PUBLISHER]
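The face-up/face-down reading trick lends itself to a toy decoder; the header patterns and payload below are invented for illustration and are not taken from the letter:

```python
# Hypothetical 4-bit headers at each end of the 40-bit payload; the real
# tag's header sequences are not specified in this listing.
HEAD_START, HEAD_END = (1, 0, 1, 1), (0, 1, 0, 0)

def read_tag(bits):
    """Return the 40-bit payload regardless of tag orientation."""
    if tuple(bits[:4]) == HEAD_START and tuple(bits[-4:]) == HEAD_END:
        return bits[4:-4]                      # chain faced up
    flipped = bits[::-1]
    if tuple(flipped[:4]) == HEAD_START and tuple(flipped[-4:]) == HEAD_END:
        return flipped[4:-4]                   # chain faced down: reverse the scan
    raise ValueError("header mismatch: not a valid tag read")

payload = [1, 0] * 20                          # toy 40-bit identifier
tag = list(HEAD_START) + payload + list(HEAD_END)
assert read_tag(tag) == read_tag(tag[::-1]) == payload
```

The two header sequences must not be reverses of each other, so that a reversed scan can never masquerade as a forward one.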
- Published
- 2018
- Full Text
- View/download PDF
20. Enhancing LS-PIE's Optimal Latent Dimensional Identification: Latent Expansion and Latent Condensation.
- Author
- Stevens, Jesse, Wilke, Daniel N., and Setshedi, Isaac I.
- Subjects
- SINGULAR value decomposition, COMPACT spaces (Topology), LATENT variables, PRINCIPAL components analysis, CONDENSATION
- Abstract
The Latent Space Perspicacity and Interpretation Enhancement (LS-PIE) framework enhances dimensionality reduction methods for linear latent variable models (LVMs). This paper extends LS-PIE by introducing an optimal latent discovery strategy to automate the identification of optimal latent dimensions and projections based on user-defined metrics. The latent condensing (LCON) method clusters and condenses an extensive latent space into a compact form. A new approach, latent expansion (LEXP), incrementally increases latent dimensions using a linear LVM to find an optimal compact space. This study compares these methods across multiple datasets, including a simple toy problem, mixed signals, ECG data, and simulated vibrational data. LEXP can accelerate the discovery of optimal latent spaces and may yield different compact spaces from LCON, depending on the LVM. This paper highlights the LS-PIE algorithm's applications and compares LCON and LEXP in organising, ranking, and scoring latent components, akin to principal component analysis or singular value decomposition. Clear improvements are shown in the interpretability of the resulting latent representations, allowing for clearer and more focused analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. HME-KG: A method of constructing the human motion encoding knowledge graph based on a hierarchical motion model.
- Author
- Liu, Qi, Huang, Tianyu, and Li, Xiangchen
- Subjects
- KNOWLEDGE graphs, MOTION capture (Human mechanics), POSTURE, ENCODING, VISUALIZATION
- Abstract
The diversity, infinity, and nonuniform description of human motion make it challenging for computers to understand human activities. To explore and reuse captured human motion data, this work defines a more comprehensive hierarchical theoretical model of human motion and proposes a standard human posture encoding scheme. We construct a domain knowledge graph (DKG) named the human motion encoding knowledge graph (HME-KG) based on posture codes and action labels. Community detection, similarity analysis, and centrality analysis are used to explore the potential value of motion data. This paper conducts an evaluation and visualization of HME-KG. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Han–Kobayashi and Dirty-Paper Coding for Superchannel Optical Communications.
- Author
- Koike-Akino, Toshiaki, Kojima, Keisuke, Millar, David S., Parsons, Kieran, Kametani, Soichiro, Sugihara, Takashi, Yoshida, Tsuyoshi, Ishida, Kazuyuki, Miyata, Yoshikuni, Matsumoto, Wataru, and Mizuochi, Takashi
- Abstract
Superchannel transmission is a candidate to realize Tb/s-class high-speed optical communications. In order to achieve higher spectrum efficiency, the channel spacing shall be as narrow as possible. However, densely allocated channels can cause non-negligible inter-channel interference (ICI) especially when the channel spacing is close to or below the Nyquist bandwidth. In this paper, we consider joint decoding to cancel the ICI in dense superchannel transmission. To further improve the spectrum efficiency, we propose the use of Han–Kobayashi superposition coding. In addition, for the case when neighboring subchannel transmitters can share data, we introduce dirty-paper coding for pre-cancelation of the ICI. We analytically evaluate the potential gains of these methods when ICI is present for sub-Nyquist channel spacing. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
23. Generalization and Analysis of the Paper Folding Method for Steganography.
- Author
- Zhang, Weiming, Liu, Jiufen, Wang, Xin, and Yu, Nenghai
- Abstract
Wet paper codes (WPCs) are designed for steganography, in which the sender and recipient do not need to share the changeable positions. In this paper, we propose the N-page construction for wet paper coding, which can generate a family of WPCs following the upper bound on embedding efficiency from one single WPC. The Paper Folding method, one of our previous methods, is a special case of the N-page construction with $N = 2^k$. We deduce recursions for calculating the embedding efficiency of the N-page construction, and obtain an explicit expression for the embedding efficiency of the $2^k$-page construction. Furthermore, we derive the limit of the distance between the embedding efficiency of the $2^k$-page construction and the upper bound on embedding efficiency as $k$ tends to infinity. Based on this limit, we analyze how the embedding efficiency is influenced by the proportion of wet pixels (wet ratio) in the cover, showing that embedding efficiency only drops by about 0.32 as the wet ratio increases to 0.9999. [ABSTRACT FROM PUBLISHER]
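As a concrete reference point for syndrome-style embedding (the building block that WPCs generalize), here is textbook matrix embedding with the [7,4] Hamming parity-check matrix. It hides 3 message bits in 7 cover bits while flipping at most one bit, and illustrates the mechanics rather than the N-page construction itself:

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code: column i is the binary
# expansion of i + 1 (row 0 holds the least significant bit).
H = np.array([[(i + 1) >> k & 1 for i in range(7)] for k in range(3)])

def embed(cover, message):
    """Flip at most one of 7 cover bits so that (H @ stego) mod 2 == message."""
    stego = cover.copy()
    need = (H @ cover + message) % 2          # extra syndrome still required
    if need.any():
        col = int(need[0] + 2 * need[1] + 4 * need[2]) - 1
        stego[col] ^= 1                       # column `col` of H equals `need`
    return stego

def extract(stego):
    return (H @ stego) % 2                    # the recipient reads the syndrome

cover = np.array([1, 0, 1, 1, 0, 0, 1])      # 7 cover bits
message = np.array([1, 1, 0])                # 3 message bits
stego = embed(cover, message)
assert (extract(stego) == message).all() and (stego != cover).sum() <= 1
```

WPCs refine exactly this scheme for the case where some cover positions are "wet" and must not be flipped.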
- Published
- 2010
- Full Text
- View/download PDF
24. Specific Emitter Identification Algorithm Based on Time–Frequency Sequence Multimodal Feature Fusion Network.
- Author
- He, Yuxuan, Wang, Kunda, Song, Qicheng, Li, Huixin, and Zhang, Bozhi
- Subjects
- RADAR signal processing, RADAR, ALGORITHMS, ENCODING, SIGNALS & signaling
- Abstract
Specific emitter identification is a challenge in the field of radar signal processing. It aims to extract the individual fingerprint features of a signal. However, earlier works are designed around either the raw signal or its time–frequency image and rely heavily on hand-crafted features or complex interactions in a high-dimensional feature space. This paper introduces the time–frequency multimodal feature fusion network, a novel architecture based on multimodal feature interaction. Specifically, we designed a time–frequency signal feature encoding module, a WVD image feature encoding module, and a multimodal feature fusion module. Additionally, we propose a feature point filtering mechanism named FMM for signal embedding. Our algorithm demonstrates high performance in comparison with state-of-the-art mainstream identification methods. The results indicate that our algorithm outperforms the others, achieving the highest accuracy, precision, recall, and F1-score, surpassing the second best by 9.3%, 8.2%, 9.2%, and 9%, respectively. Notably, the visual results show that the proposed method aligns with the signal generation mechanism, effectively capturing the distinctive fingerprint features of radar data. This paper establishes a foundational architecture for subsequent multimodal research in SEI tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Orthogonal Matrix-Autoencoder-Based Encoding Method for Unordered Multi-Categorical Variables with Application to Neural Network Target Prediction Problems.
- Author
- Wang, Yiying, Li, Jinghua, Yang, Boxin, Song, Dening, and Zhou, Lei
- Subjects
- ARTIFICIAL neural networks, BAYESIAN analysis, ELECTRONIC data processing, PROBLEM solving, ENCODING
- Abstract
Neural network models, such as BP, LSTM, etc., support only numerical inputs, so categorical variables must be preprocessed into numerical data. For unordered multi-categorical variables, existing encoding methods can cause a dimensionality explosion and may also introduce spurious order and distance bias into neural network computation. To solve these problems, this paper proposes O-AE, an encoding method for unordered multi-categorical variables that uses an orthogonal matrix for encoding, with representation learning and dimensionality reduction via an autoencoder. Bayesian optimization is used to tune the autoencoder's hyperparameters. Finally, seven experiments were designed with the basic O-AE, the Bayesian-optimized O-AE, and other encoding methods to encode unordered multi-categorical variables in five datasets, which were then input into a BP neural network for target prediction experiments. The results show that O-AE and O-AE-b achieve better prediction results, demonstrating that the proposed method is highly feasible and applicable and can be a method of choice for preprocessing unordered multi-categorical variables. [ABSTRACT FROM AUTHOR]
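The first half of the pipeline, assigning each unordered category a row of an orthogonal matrix so that all category vectors are mutually orthogonal and equidistant, can be sketched as follows (the autoencoder compression stage is omitted; the matrix size and category names are illustrative):

```python
import numpy as np

def orthogonal_codes(categories, seed=0):
    """Map each category to a row of a random orthogonal matrix."""
    k = len(categories)
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.normal(size=(k, k)))   # Q has orthonormal rows
    return {c: q[i] for i, c in enumerate(categories)}

codes = orthogonal_codes(["red", "green", "blue", "yellow"])
v1, v2 = codes["red"], codes["blue"]
print(np.dot(v1, v2))          # ~0: no spurious order or distance bias
print(np.linalg.norm(v1))      # 1: every category sits at equal norm
```

When the number of categories is large, the autoencoder stage would then compress these k-dimensional codes into a lower-dimensional representation.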
- Published
- 2024
- Full Text
- View/download PDF
26. FIBTNet: Building Change Detection for Remote Sensing Images Using Feature Interactive Bi-Temporal Network.
- Author
- Wang, Jing, Lin, Tianwen, Zhang, Chen, and Peng, Jun
- Subjects
- REMOTE sensing, PROBLEM solving, DECISION making, ENCODING
- Abstract
In this paper, a feature interactive bi-temporal change detection network (FIBTNet) is designed to solve the problem of pseudo-change in remote sensing image building change detection. The network improves the accuracy of change detection through bi-temporal feature interaction. FIBTNet combines a bi-temporal feature exchange architecture (EXA) and a bi-temporal difference extraction architecture (DFA). EXA improves the feature exchange ability of the model's encoding process through multiple spatial, channel, or hybrid feature exchange methods, while DFA uses the change residual (CR) module to improve the ability of the model's decoding process to extract differential features at multiple scales. Additionally, at the junction of encoder and decoder, channel exchange is combined with the CR module to achieve an adaptive channel exchange, which further improves the decision-making performance of model feature fusion. Experimental results on the LEVIR-CD and S2Looking datasets demonstrate that FIBTNet achieves superior F1 scores, Intersection over Union (IoU), and Recall compared to mainstream building change detection models, confirming its effectiveness and superiority in the field of remote sensing image change detection. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Dirty Paper Coding for Gaussian Cognitive Z-Interference Channel: Performance Results.
- Author
- Al-qudah, Zouhair and Rajan, Dinesh
- Abstract
In this paper, we present a practical application of dirty paper coding (DPC) for the Gaussian cognitive Z-interference channel. A two stage transmission scheme is proposed in which the cognitive transmitter first obtains the interference signal from the primary transmitter and then uses DPC to improve the performance of the cognitive link. Numerical results show that causal knowledge of the interference provides more than 3 dB improvement in performance in certain scenarios over a scheme that does not use interference cancellation. Results are also shown when the cognitive transmitter operates in both half-duplex and full-duplex modes. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
28. 48‐1: Invited Paper: Holographic Display Based on Complex‐Amplitude Encoding with Phase‐Only SLMs.
- Author
- Sui, Xiaomeng, Cao, Liangcai, and Jin, Guofan
- Subjects
- HOLOGRAPHIC displays, HOLOGRAPHY, LIGHT filters, IMAGE reconstruction, DIGITAL holographic microscopy, ENCODING
- Abstract
Double-phase holograms enable holographic reconstructions with improved image quality but still suffer from the spatial-shifting noise generated by complex-amplitude wavefront encoding. The band-limited double-phase method can suppress this spatial-shifting noise through band limitation. A multi-plane complex-amplitude holographic display is implemented based on the band-limited double-phase hologram. High-sharpness reconstructions free of spatial-shifting noise are realized with numerical band limitation and optical filtering. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
29. Capacity of a Class of Multicast Tree Networks.
- Author
- Lee, Si-Hyeon and Chung, Sae-Young
- Subjects
- TOPOLOGY, PAPER arts, COMPUTER networks, MEMORYLESS systems, CHANNEL coding
- Abstract
In this paper, we characterize the capacity of a new class of discrete memoryless multicast networks having a tree topology. For achievability, a novel coding scheme is constructed where some relays employ a combination of decode-and-forward and compress-and-forward and the other relays perform a random binning such that codebook constructions and relay operations are independent for each node and do not depend on the network topology. For converse, a new technique of iteratively manipulating inequalities exploiting the tree topology is used. This class of multicast tree networks includes the class of diamond networks studied by Kang and Ulukus as a special case. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
30. Localized Error Correction in Projective Space.
- Author
- Cai, Ning
- Subjects
- PAPER arts, ERRORS, CODING theory, DIMENSIONS
- Abstract
In this paper, we extend the localized error correction code introduced by L. A. Bassalygo and coworkers from Hamming space to projective space. For constant-dimensional localized error correction codes in projective space, we have a lower bound and an upper bound on the capacity, which are asymptotically tight when $z < x \leq \frac{n-z}{2}$, where $x$, $z$, and $n$ are the dimensions of codewords, error configurations, and the ground space, respectively. We determine the capacity of nonconstant-dimensional localized error correction codes when $z < \frac{n}{3}$. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
31. On the Multiple-Access Channel With Common Rate-Limited Feedback.
- Author
- Shaviv, Dor and Steinberg, Yossef
- Subjects
- ENCODING, PAPER arts, MATHEMATICS, MARKOV processes, GAUSSIAN channels
- Abstract
This paper studies the multiple-access channel (MAC) with rate-limited feedback. The channel output is encoded into one stream of bits, which is provided causally to the two users at the channel input. An achievable rate region for this setup is derived, based on superposition of information, block Markov coding, and coding with various degrees of side information for the feedback link. The suggested region coincides with the Cover–Leung inner bound for large feedback rates. The result is then extended for cases where there is only a feedback link to one of the transmitters, and for a more general case where there are two separate feedback links to both transmitters. We compute achievable regions for the Gaussian MAC and for the binary erasure MAC. The Gaussian region is computed for the case of common rate-limited feedback, whereas the region for the binary erasure MAC is computed for one-sided feedback. It is known that for the latter, the Cover–Leung region is tight, and we obtain results that coincide with the feedback capacity region for high feedback rates. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
32. Coding With Noiseless Feedback Over the Z-Channel.
- Author
- Deppe, Christian, Lebedev, Vladimir, Maringer, Georg, and Polyanskii, Nikita
- Subjects
- ERROR-correcting codes, BOUND states, PARALLEL algorithms
- Abstract
In this paper, we consider encoding strategies for the Z-channel with noiseless feedback. We analyze the combinatorial setting where the maximum number of errors inflicted by an adversary is proportional to the number of transmissions, which goes to infinity. Without feedback, it is known that the rate of optimal asymmetric-error-correcting codes vanishes as the blocklength grows for error fractions $\tau \ge 1/4$. Here, we give an efficient feedback encoding scheme with $n$ transmissions that achieves a positive rate for any fraction of errors $\tau < 1$ as $n \to \infty$. Additionally, we state an upper bound on the rate of asymptotically long feedback asymmetric-error-correcting codes. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
33. A Density Evolution Based Framework for Dirty-Paper Code Design Using TCQ and Multilevel LDPC Codes.
- Author
- Yang, Xiong, Zixiang, Yu-chun, Wu, and Zhang, Philipp
- Abstract
We propose a density evolution based dirty-paper code design framework that combines trellis-coded quantization with multi-level low-density parity-check (LDPC) codes. Unlike existing design techniques based on Gaussian approximation and EXIT charts, the proposed framework tracks the empirically collected log-likelihood ratio (LLR) distributions at each iteration, and employs density evolution and differential evolution algorithms to design each LDPC component code. The performance of the dirty-paper codes designed using the proposed method comes within 0.37 dB of the theoretical limit at a transmission rate of 1 bit per sample, achieving a 0.21 dB gain over the best known result. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
34. Long Short-Term Memory-Based Non-Uniform Coding Transmission Strategy for a 360-Degree Video.
- Author
- Guo, Jia, Li, Chengrui, Zhu, Jinqi, Li, Xiang, Gao, Qian, Chen, Yunhe, and Feng, Weijia
- Subjects
- PREDICTION models, TILES, VIDEOS, ALGORITHMS, VIDEO coding, ENCODING
- Abstract
This paper studies LSTM-based adaptive transmission for 360-degree video and proposes a non-uniform encoding transmission strategy built on LSTM. Our goal is to maximize the user's video experience by dynamically dividing the 360-degree video into tiles of different numbers and sizes, and selecting a different bitrate for each tile, so as to reduce buffering events and video jitter. To determine the optimal number and size of tiles at the current moment, we constructed a dual-layer stacked LSTM network model. This model predicts, in real time, the number, size, and bitrate of the tiles needed at the next moment of the 360-degree video based on the distance between the user's eyes and the screen. In our experiments, we used an exhaustive algorithm to calculate the optimal tile division and bitrate selection scheme for a 360-degree video under different network conditions, and used this dataset to train our prediction model. Finally, comparison with other advanced algorithms demonstrates the superiority of the proposed method. [ABSTRACT FROM AUTHOR]
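A minimal sketch of the kind of dual-layer stacked LSTM the abstract describes (PyTorch; every dimension, the input feature, and the three-way output head are hypothetical placeholders rather than the authors' configuration):

```python
import torch
import torch.nn as nn

class TilePredictor(nn.Module):
    """Predict (tile count, tile size, bitrate) from a viewing-distance history."""
    def __init__(self, in_dim=1, hidden=64, out_dim=3):
        super().__init__()
        # num_layers=2 gives the dual-layer stacked LSTM
        self.lstm = nn.LSTM(in_dim, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, x):                 # x: (batch, time, in_dim)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # predict from the last time step

model = TilePredictor()
distances = torch.randn(8, 30, 1)         # 8 sessions, 30 eye-to-screen samples
print(model(distances).shape)             # torch.Size([8, 3])
```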
- Published
- 2024
- Full Text
- View/download PDF
35. Fd-CasBGRel: A Joint Entity–Relationship Extraction Model for Aquatic Disease Domains.
- Author
- Ye, Hongbao, Lv, Lijian, Zhou, Chengquan, and Sun, Dawei
- Subjects
- KNOWLEDGE graphs, CORPORA, WEBSITES, GENERALIZATION, ENCODING
- Abstract
Featured Application: The model is primarily utilized for the task of entity-relationship extraction during the construction of an aquatic disease knowledge graph. Entity-relationship extraction plays a pivotal role in the construction of domain knowledge graphs. For the aquatic disease domain, however, relationship extraction is a formidable task because of overlapping relationships, data specialization, limited feature fusion, and imbalanced data samples, which significantly weaken extraction performance. To tackle these challenges, this study leverages published books and aquatic disease websites as data sources to compile a text corpus, establish datasets, and propose the Fd-CasBGRel model specifically tailored to the aquatic disease domain. The model uses the Casrel cascading binary tagging framework to address relationship overlap; utilizes task fine-tuning for better performance on aquatic disease data; trains on specialized aquatic disease corpora to improve adaptability; and integrates the BRC feature fusion module (which incorporates self-attention mechanisms, BiLSTM, relative position encoding, and conditional layer normalization) to leverage entity position and context for enhanced fusion. Further, it replaces the traditional cross-entropy loss function with the GHM loss function to mitigate category imbalance. The experimental results indicate that the F1 score of Fd-CasBGRel on the aquatic disease dataset reached 84.71%, significantly outperforming several benchmark models. The model effectively addresses the low triple-extraction performance caused by high data specialization, insufficient feature integration, and data imbalance. It achieved the highest F1 score, 86.52%, on the overlapping-relationship dataset, demonstrating its robust capability in extracting overlapping relations. Furthermore, we conducted comparative experiments on the publicly available WebNLG dataset, where the model obtained the best performance metrics among the compared models, indicating good generalization ability. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. A cognitive modelling of translation: A construal-based perspective.
- Author
- Tao, Mei
- Subjects
- TRANSLATORS, TRANSLATING & interpreting, IMAGINATION, SUBJECTIVITY, ENCODING
- Abstract
This paper brings a cognitive linguistic theory into (cognitive) translation studies and offers a theoretical model of translation from the construal perspective. In this construal-based theory, translators are modelled as construers characterized by subjectivity, and meaning decoding and encoding in translation are equated with construal, which is manifested at two levels. At the cognitive level, translators construe the source text, whereas at the linguistic level the construal established by the translators is packaged in the target language. The translation process involves translators' construal operations such as perspective, selection, prominence and dynamicity, and imagination. Since construal is situated in context, it is impossible to recreate the construals of the source text author, but optimal construal can be envisioned if the situated contexts for the source text author(s) as well as that for the target text are best accommodated. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Binding in Najdi Arabic: Types of Reflexives, the Argument Structure of Reflexive Constructions and Possessive Reflexives.
- Author
- Alowayed, Asma I. and Albaty, Yasser A.
- Subjects
- ARGUMENT, REFLEXIVITY, ENCODING, SYNTAX (Grammar)
- Abstract
The present paper investigates reflexives in Najdi Arabic (NA). We start by examining how the encoding of reflexivity in NA can be attained lexically, morphologically, and syntactically. We also investigate the argument structure of reflexive constructions in NA in accordance with Reinhart and Siloni’s (2005) bundling approach. Finally, possessive reflexives and their cross-linguistic distribution with definiteness marking are examined, providing empirical coverage to this area in NA. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. DNA encoding schemes herald a new age in cybersecurity for safeguarding digital assets.
- Author
- Aqeel, Sehrish, Khan, Sajid Ullah, Khan, Adnan Shahid, Alharbi, Meshal, Shah, Sajid, Affendi, Mohammed EL, and Ahmad, Naveed
- Subjects
- ARTIFICIAL chromosomes, DNA, INTERNET security, ENCODING, ASSETS (Accounting)
- Abstract
With the growing urge to secure and protect digital assets, there is an immediate need for robust security measures that keep pace with evolving cyber threats. Advanced methods, like conventional encryption schemes, remain vulnerable and at best place constraints on attacks. DNA encoding schemes offer synthetic DNA sequences as a promising alternative, encoding digital data while exploiting DNA's unique properties, such as stability and durability. This study explores DNA's potential for encoding in evolving cyber security. Based on a systematic literature review, this paper discusses the challenges, advantages, and directions for future work. We analyzed current trends and innovations in methodology, security attacks, tool implementations, and the metrics used for evaluation. Various tools, such as Mathematica, MATLAB, the NIST test suite, and CloudSim, were employed to evaluate the performance of the proposed methods and obtain results. By identifying the strengths and limitations of proposed methods, the study highlights research challenges and offers future scope for investigation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. A clinical trial termination prediction model based on denoising autoencoder and deep survival regression.
- Author
- Qi, Huamei, Yang, Wenhui, Zou, Wenqin, and Hu, Yuxuan
- Subjects
- SIGNAL denoising, PREDICTION models, REGRESSION analysis, ENCODING, PREGNANT women
- Abstract
Effective clinical trials are necessary for understanding medical advances, but early termination of trials can result in unnecessary waste of resources. Survival models can be used to predict survival probabilities in such trials. However, survival data from clinical trials are sparse, and DeepSurv cannot accurately capture their effective features, making such models weak in generalization and decreasing their prediction accuracy. In this paper, we propose a survival prediction model for clinical trial completion based on the combination of a denoising autoencoder (DAE) and the DeepSurv model. The DAE is used to obtain a robust representation of features by breaking the loop of raw features after autoencoder training, and the robust features are then provided to DeepSurv as training input. The clinical trial dataset for training the model was obtained from ClinicalTrials.gov. A study of clinical trial completion in pregnant women was conducted in response to the fact that many current clinical trials exclude pregnant women. The experimental results showed that the denoising autoencoder and deep survival regression (DAE-DSR) model was able to extract meaningful and robust features for survival analysis; the C-indices of the training and test datasets were 0.74 and 0.75, respectively. Compared with the Cox proportional hazards and DeepSurv models, the survival analysis curves obtained using the DAE-DSR model had more prominent features, and the model was more robust and performed better in actual prediction. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. Beamformer Designs for MISO Broadcast Channels with Zero-Forcing Dirty Paper Coding.
- Author
- Tran, Le-Nam, Juntti, Markku, Bengtsson, Mats, and Ottersten, Bjorn
- Abstract
We consider beamformer design for multiple-input single-output (MISO) broadcast channels (MISO BCs) using zero-forcing dirty paper coding (ZF-DPC). Assuming a sum power constraint (SPC), most previously proposed beamformer designs are based on the QR decomposition (QRD), which is a natural choice to satisfy the ZF constraints. However, the optimality of the QRD-based design for ZF-DPC has remained unknown. In this paper, we first establish analytically that the QRD-based design is indeed optimal for any performance measure under an SPC. Then, we propose an optimal beamformer design method for ZF-DPC with per-antenna power constraints (PAPCs), using a convex optimization framework. The beamformer design is first formulated as a rank-1-constrained optimization problem. Exploiting the special structure of the ZF-DPC scheme, we prove that the rank constraint can be relaxed while still yielding the same solution. In addition, we propose a fast-converging algorithm for the beamformer design problem, under the duality framework between the BCs and multiple access channels (MACs). More specifically, we show that a BC with ZF-DPC has a dual MAC with ZF-based successive interference cancellation (ZF-SIC). In this way, the beamformer design for ZF-DPC is transformed into a power allocation problem for ZF-SIC, which can be solved more efficiently. [ABSTRACT FROM PUBLISHER]
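The QRD construction at the heart of ZF-DPC is easy to demonstrate numerically: a QR decomposition of the conjugate-transposed channel renders the effective channel triangular, so each user is interfered with only by already-encoded users, whose contributions DPC pre-cancels. A generic numpy sketch (not the paper's optimized designs):

```python
import numpy as np

rng = np.random.default_rng(1)
K, M = 4, 4                                # single-antenna users x TX antennas
H = rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))

Q, R = np.linalg.qr(H.conj().T)            # H^H = Q R  =>  H Q = R^H
B = Q                                      # unit-norm ZF-DPC beamformers
H_eff = H @ B                              # lower triangular (up to round-off)

print(np.round(np.abs(H_eff), 3))
# Row k has zeros beyond column k: user k's signal is only interfered with by
# users 1..k-1, whose already-encoded signals are known and removable via DPC.
```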
- Published
- 2013
- Full Text
- View/download PDF
41. A multi-scale residual encoding network for concrete crack segmentation.
- Author
- Liu, Die, Xu, MengDie, Li, ZhiTing, He, Yingying, Zheng, Long, Xue, Pengpeng, and Wu, Xiaodong
- Subjects
- CRACKING of concrete, LINEAR network coding, SURFACE cracks, ENCODING
- Abstract
Concrete surface crack detection plays a crucial role in ensuring concrete safety. However, manual crack detection is time-consuming, necessitating an automatic method to streamline the process. Nonetheless, detecting concrete cracks automatically remains challenging due to the heterogeneous strength of cracks and the complex background. To address this issue, we propose a multi-scale residual encoding network for concrete crack segmentation. This network leverages the basic U-Net structure to merge feature maps from different levels into low-level features, thus enhancing the utilization of predicted feature maps. The primary contribution of this research is the enhancement of the U-Net encoding network through the incorporation of a residual structure. This modification improves the encoding network's ability to extract features related to small cracks. Furthermore, an attention mechanism is utilized within the network to enhance the receptive-field information of the crack feature map. The integration of this mechanism enhances the accuracy of crack detection across various scales. Furthermore, we introduce a loss function tailored to crack datasets to tackle the imbalance of positive and negative samples in concrete crack images. This loss function helps improve the prediction accuracy of crack pixels. To demonstrate the superiority and universality of the proposed method, we conducted a comparative evaluation against state-of-the-art edge detection and semantic segmentation methods using a standardized evaluation approach. Experimental results on the SDNET2018 dataset demonstrate the effectiveness of our method, achieving mIoU, F1-score, Precision, and Recall of 0.862, 0.941, 0.945, and 0.9394, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. A Multi-Modal Entity Alignment Method with Inter-Modal Enhancement.
- Author
- Yuan, Song, Lu, Zexin, Li, Qiyuan, and Gu, Jinguang
- Subjects
- ENCODING
- Abstract
Due to inter-modal effects hidden in multi-modalities and the impact of weak modalities on multi-modal entity alignment, a Multi-modal Entity Alignment Method with Inter-modal Enhancement (MEAIE) is proposed. This method introduces a unique modality called the numerical modality and applies a numerical feature encoder to encode it. In the feature embedding stage, this paper utilizes visual features to enhance entity relation representation and influence entity attribute weight distribution. Then, attention layers and contrastive learning are introduced to strengthen inter-modal effects and mitigate the impact of weak modalities. To evaluate the performance of the proposed method, experiments are conducted on three public datasets: FB15K, DB15K, and YG15K. Combining the datasets in pairs and comparing with current state-of-the-art multi-modal entity alignment models, the proposed model achieves a 2% and 3% improvement in Top-1 Hit Rate (Hit@1) and Mean Reciprocal Rank (MRR), respectively, demonstrating its feasibility and effectiveness. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
43. Dictionary form in decoding, encoding and retention: Further insights.
- Author
- DZIEMIANKO, ANNA
- Subjects
- ELECTRONIC dictionaries, FOREIGN language education, PHONOLOGICAL encoding, PHONOLOGICAL decoding, ENCYCLOPEDIAS & dictionaries, COLLOCATION (Linguistics)
- Abstract
The aim of the paper is to investigate the role of dictionary form (paper versus electronic) in language reception, production and retention. The body of existing research does not give a clear answer as to which dictionary medium benefits users more. Divergent findings from many studies into the topic might stem from differences in research methodology (including the various tasks, participants and dictionaries used by different authors). Even a series of studies conducted by one researcher (Dziemianko, 2010, 2011, 2012b) leads to contradictory conclusions, possibly because of the use of paper and electronic versions of existing dictionaries, and the resulting problem with isolating dictionary form as a factor. To be able to argue with confidence that the results obtained follow from different dictionary formats, rather than presentation issues, research methodology should be improved. To successfully generalize about the significance of the medium for decoding, encoding and learning, the current study replicates previous research, but the presentation of lexicographic data on paper and on screen is now balanced, and the paper/electronic opposition is operationalized more appropriately. A real online dictionary and its paper-based counterpart composed of printouts of screen displays were used in the experiment in which the meaning of English nouns and phrases was explained, and collocations were completed with missing prepositions. A delayed post-test checked the retention of the meanings and collocations. The results indicate that dictionary medium does not play a statistically significant role in reception and production, but it considerably affects retention. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
44. Practical Dirty Paper Coding Schemes Using One Error Correction Code With Syndrome.
- Author
-
Kim, Taehyun, Kwon, Kyunghoon, and Heo, Jun
- Abstract
Dirty paper coding (DPC) offers an information-theoretic result for pre-cancellation of known interference at the transmitter. In this letter, we propose practical DPC schemes that use only one error correction code. Our designs focus on practical use from the viewpoint of complexity. For a fair comparison with previous schemes, we quantify the complexity of the proposed schemes by the number of operations used. Simulation results show that, compared to previous DPC schemes, the proposed schemes require lower transmission power to maintain a bit error rate below 10^-5. [ABSTRACT FROM PUBLISHER]
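The letter's concrete constructions are not reproduced here, but the underlying one-code idea of carrying the message in a syndrome can be illustrated with a binary toy example: the message fixes a coset of a linear code, and the encoder picks the coset member closest to the known interference so that little transmit energy is spent. The sketch below uses the (7, 4) Hamming code and brute-force search purely for illustration.

# Toy illustration (not the letter's actual scheme) of syndrome-based
# binning: the message selects a coset, and the encoder transmits the
# small difference between the nearest coset member and the interference.
import itertools
import numpy as np

# Parity-check matrix of the (7, 4) Hamming code; syndromes carry 3 bits.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def encode(msg, interference):
    """Pick v with syndrome Hv = msg nearest (in Hamming distance) to the
    known interference, and transmit x = v XOR interference."""
    best = min((v for v in itertools.product([0, 1], repeat=7)
                if np.array_equal(H @ np.array(v) % 2, msg)),
               key=lambda v: int(np.sum(np.array(v) != interference)))
    return np.array(best) ^ interference

def decode(received):
    """The receiver sees v = x XOR interference and reads off the syndrome."""
    return H @ received % 2

msg = np.array([1, 0, 1])
s = np.array([1, 1, 0, 0, 1, 0, 1])        # interference known to the encoder
x = encode(msg, s)                          # weight <= 1: covering radius of the code
assert np.array_equal(decode(x ^ s), msg)   # noiseless channel output is x XOR s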
- Published
- 2017
- Full Text
- View/download PDF
45. Hill Matrix and Radix-64 Bit Algorithm to Preserve Data Confidentiality.
- Author
-
Arshad, Ali, Nadeem, Muhammad, Riaz, Saman, Zahra, Syeda Wajiha, Dutta, Ashit Kumar, Alzaid, Zaid, Alabdan, Rana, Almutairi, Badr, and Almotairi, Sultan
- Subjects
DATA encryption, DATA security, DATA protection, ALGORITHMS, CONFIDENTIAL communications - Abstract
Many cloud data security techniques and algorithms can detect attacks on cloud data, but they cannot protect the data from an attacker. Cloud cryptography is the best way to transmit data in a secure and reliable format. Researchers have developed various mechanisms to transfer data securely by converting it from readable to unreadable form, but these algorithms are not sufficient to provide complete data security; each has its own weaknesses. With effective data protection techniques, an attacker can neither decipher the encrypted data nor, even after tampering with it, gain access to the original data. In this paper, several data security techniques are developed that together protect the data from attackers. First, a customized American Standard Code for Information Interchange (ASCII) table is developed, with a value defined for each index; an attacker who applies the standard predefined ASCII table to the ciphertext, which would ordinarily help in decryption, gains nothing. Next, a radix-64 encryption mechanism is used, which doubles the number of cipher values relative to the original data; an attacker who tries to decrypt each value obtains data that bears no relation to the original. Finally, a Hill matrix algorithm is created that generates a key bound to the exact plaintext for which it was created; this key cannot be used for any other plaintext, as each Hill key is valid only within the boundaries of its own text. The techniques used in this paper are compared with those of various other papers, and it is discussed how the current algorithm improves on them. The Kasiski test is then used to verify the validity of the proposed algorithm, showing that if the proposed algorithm is used for data encryption, an attacker cannot break its security using any known technique or algorithm. [ABSTRACT FROM AUTHOR]
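For orientation, the following sketch shows only the classical Hill-cipher matrix step that the paper builds on; the customized ASCII table, the radix-64 expansion and the paper's key-binding scheme are not reproduced, and the key matrix shown is an arbitrary invertible example.

# Minimal sketch of the classical Hill-cipher step (the paper's customized
# ASCII table and radix-64 stages are not reproduced here).
import numpy as np

def hill_encrypt(plaintext, key, modulus=256):
    """Encrypt byte blocks as c = K @ p mod 256. The key matrix must be
    invertible mod 256 (odd determinant) for decryption to exist."""
    n = key.shape[0]
    data = list(plaintext.encode())
    data += [0] * (-len(data) % n)            # zero-pad to a full block
    blocks = np.array(data).reshape(-1, n).T  # one column per block
    return (key @ blocks % modulus).T.flatten()

key = np.array([[3, 3],
                [2, 5]])                      # det = 9, odd => invertible mod 256
print(hill_encrypt("PAPER", key))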
- Published
- 2023
- Full Text
- View/download PDF
46. Investigation of MARC use in Iranian academic libraries
- Author
-
Ghaebi, Amir, Shamsbod, Mahmood, and Karimi‐Mansoorabad, Elham
- Published
- 2010
- Full Text
- View/download PDF
47. Cloud media video encoding: review and challenges
- Author
-
Moina-Rivera, Wilmer, Garcia-Pineda, Miguel, Gutiérrez-Aguado, Juan, and Alcaraz-Calero, Jose M.
- Published
- 2024
- Full Text
- View/download PDF
48. Why One and Two Do Not Make Three: Dictionary Form Revisited
- Author
-
Anna Dziemianko
- Subjects
paper dictionaries, electronic dictionaries, dictionary use, encoding, decoding, retention, research methods, replication, menus, highlighting, noise, access, entry length, Philology. Linguistics, P1-1091, Languages and literature of Eastern Asia, Africa, Oceania, PL1-8844, Germanic languages. Scandinavian languages, PD1-7159 - Abstract
The primary aim of the article is to compare the usefulness of paper and electronic versions of OALDCE7 (Wehmeier 2005) for language encoding, decoding and learning. It is explained why, in contrast to Dziemianko's (2010) findings concerning COBUILD6 (Sinclair 2008), but in keeping with her observations (Dziemianko 2011) with regard to LDOCE5 (Mayor 2009), the e-version of OALDCE7 proved to be no better for language reception, production and learning than the dictionary in book form. An attempt is made to pinpoint the micro- and macrostructural design features which make e-COBUILD6 a better learning tool than e-OALDCE7 and e-LDOCE5. Recommendations concerning further research into the significance of the medium (paper vs. electronic) in the process of dictionary use conclude the study. The secondary aim which the paper attempts to achieve is to present the status of replication as a scientific research method and justify its use in lexicography.
- Published
- 2012
- Full Text
- View/download PDF
49. Performance Analysis of New 2D Spatial OCDMA Encoding based on HG Modes in Multicore Fiber.
- Author
-
Sahraoui, Walid, Amphawan, Angela, Jasser, Muhammed Basheer, and Tse-Kian Neo
- Subjects
CODE division multiple access, CROSS correlation, ENCODING, VIDEO coding - Abstract
This paper presents a pioneering 2D spatial Optical Code-Division Multiple Access (OCDMA) encoding system that exploits Mode Division Multiplexing (MDM) and Multicore Fiber (MCF) technologies. This approach utilizes two spatial dimensions to enhance the performance and security of OCDMA systems. In the first dimension, Hermite-Gaussian modes (HG00, HG01, HG11) modulate each user's signal individually, offering a robust means of data transmission with minimal interference among users. The second dimension leverages MCF encoding, introducing two incoherent OCDMA codes: the Zero Cross Correlation (ZCC) code (λc=0) and the ZFD code (λc=1). These codes are designed and simulated with their cross-correlation properties in mind, to guarantee minimal interference and heightened data security. To assess the efficiency of this encoding system, simulations with three active users were implemented in the OptiSystem software. At the transmitter, each user's signal is modulated by its designated HG mode, resulting in separate channels. At the multicore fiber, each user's data is encoded with a unique codeword and directed through a specific core group, ensuring data isolation and integrity. The BER and eye pattern are examined with respect to parameters such as data rate and distance. At a distance of 5 km and a data rate of 10 Gbit/s, a BER of around 10^-70 is achieved. [ABSTRACT FROM AUTHOR]
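The zero-cross-correlation property (λc=0) that motivates the ZCC code can be checked numerically in a few lines; the codewords below are illustrative placeholders, not the codes designed in the paper.

# Quick numeric check of the zero-cross-correlation property: ZCC
# codewords share no '1' positions, so their in-phase cross correlation
# is exactly 0 and multiple-access interference vanishes.
import numpy as np

user1 = np.array([1, 0, 0, 1, 0, 0])   # disjoint chip positions
user2 = np.array([0, 1, 0, 0, 1, 0])

cross = int(np.dot(user1, user2))       # lambda_c between the two users
auto = int(np.dot(user1, user1))        # code weight seen by the matched user
print(cross, auto)                      # -> 0 2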
- Published
- 2023
- Full Text
- View/download PDF
50. FS-GDI Based Area Efficient Hamming (11, 7) Encoding.
- Author
-
El-Bendary, Mohsen A. M. and El-Badry, O.
- Subjects
HAMMING codes, TELECOMMUNICATION systems, ENCODING, DELAY lines, VIDEO coding, DIGITAL signal processing, TRANSISTORS - Abstract
This paper proposes an efficient design of a Hamming (11, 7) encoder utilising the Full Swing Gate Diffusion Input (FS-GDI) approach in a 65 nm technology node. The proposed design aims to improve power and area efficiency by reducing the transistor count through a power-efficient logic style. Encoding circuits for the Hamming (11, 7) and (7, 4) codes are designed using both the traditional and the proposed approaches. Power consumption, delay time, Power Delay Product (PDP) and hardware simplicity are employed as metrics for evaluating the efficiency of the proposed encoding circuits. The simulation experiments, executed with the Cadence Virtuoso simulation package, reveal that the proposed Hamming encoding circuits reduce delay time by 50.91% and 20% for the (7, 4) and (11, 7) codes, respectively. Hardware (H/W) simplicity and area efficiency are also improved by 50% compared to CMOS-based circuits. The analysis shows that the proposed FS-GDI based Hamming encoding circuits achieve efficient power and delay optimisation; hence, the power consumption, delay and area that the encoding process contributes to communication systems and DSP circuits are reduced, and the overall performance of DSP circuits can become more power- and area-efficient. [ABSTRACT FROM AUTHOR]
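For reference, the bit-level behaviour of a Hamming (11, 7) encoder can be modelled in software as below, with even-parity bits at positions 1, 2, 4 and 8 (a common convention; the paper's exact parity equations may differ).

# Software reference model of Hamming (11, 7) encoding: 7 data bits and
# 4 even-parity bits placed at the power-of-two positions 1, 2, 4, 8.
def hamming_11_7_encode(data_bits):
    assert len(data_bits) == 7
    code = [0] * 12                        # positions 1..11; index 0 unused
    data_positions = [3, 5, 6, 7, 9, 10, 11]
    for pos, bit in zip(data_positions, data_bits):
        code[pos] = bit
    for p in (1, 2, 4, 8):                 # parity p covers positions with bit p set
        code[p] = sum(code[i] for i in range(1, 12) if i & p) % 2
    return code[1:]

print(hamming_11_7_encode([1, 0, 1, 1, 0, 0, 1]))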
- Published
- 2024
- Full Text
- View/download PDF