2,964 results for "Rate–distortion theory"
Search Results
2. Information-Theoretic Approaches to Differential Privacy.
- Author
- ÜNSAL, AYŞE and ÖNEN, MELEK
- Published
- 2024
- Full Text
- View/download PDF
3. The Rational Analysis of Memory
- Author
- Gershman, Samuel J., Kahana, Michael J., book editor, and Wagner, Anthony D., book editor
- Published
- 2024
4. The Information‐Processing Perspective on Categorization.
- Author
- Martínez, Manolo
- Subjects
- RATE distortion theory, PROTOTYPES, ATOMS
- Abstract
Categorization behavior can be fruitfully analyzed in terms of the trade-off between maximizing the faithfulness with which information about samples of the classes to be categorized is transmitted and minimizing the transmission costs for that same information. The kinds of categorization behaviors we associate with conceptual atoms, prototypes, and exemplars emerge naturally as a result of this trade-off, in the presence of certain natural constraints on the probabilistic distribution of samples and on the ways in which we measure faithfulness. Beyond the general structure of categorization in these circumstances, the same information-centered perspective can shed light on other, more concrete properties of human categorization performance, such as the results of certain prominent experiments on supervised categorization. [ABSTRACT FROM AUTHOR]
- Published
- 2024
5. Efficient coding in the economics of human brain connectomics.
- Author
- Lynn, Christopher, Cui, Zaixu, Ciric, Rastko, Baum, Graham, Moore, Tyler, Roalf, David, Detre, John, Gur, Ruben, Gur, Raquel, Satterthwaite, Theodore, Bassett, Dani, and Zhou, Dale
- Subjects
- Hierarchical organization, Integration, Lossy compression, Metabolic resources, Network communication dynamics, Network hubs, Rate-distortion theory
- Abstract
In systems neuroscience, most models posit that brain regions communicate information under constraints of efficiency. Yet, evidence for efficient communication in structural brain networks characterized by hierarchical organization and highly connected hubs remains sparse. The principle of efficient coding proposes that the brain transmits maximal information in a metabolically economical or compressed form to improve future behavior. To determine how structural connectivity supports efficient coding, we develop a theory specifying minimum rates of message transmission between brain regions to achieve an expected fidelity, and we test five predictions from the theory based on random walk communication dynamics. In doing so, we introduce the metric of compression efficiency, which quantifies the trade-off between lossy compression and transmission fidelity in structural networks. In a large sample of youth (n = 1,042; age 8-23 years), we analyze structural networks derived from diffusion-weighted imaging and metabolic expenditure operationalized using cerebral blood flow. We show that structural networks strike compression efficiency trade-offs consistent with theoretical predictions. We find that compression efficiency prioritizes fidelity with development, heightens when metabolic resources and myelination guide communication, explains advantages of hierarchical organization, links higher input fidelity to disproportionate areal expansion, and shows that hubs integrate information by lossy compression. Lastly, compression efficiency is predictive of behavior (beyond the conventional network efficiency metric) for cognitive domains including executive function, memory, complex reasoning, and social cognition. Our findings elucidate how macroscale connectivity supports efficient coding and serve to foreground communication processes that utilize random walk dynamics constrained by network connectivity.
- Published
- 2022
6. Lossy compression of general random variables.
- Author
- Riegler, Erwin, Koliander, Günther, and Bölcskei, Helmut
- Subjects
- FRACTALS, RATE distortion theory, IMAGE compression, LOSSY data compression, COMPRESSED sensing, ERROR functions, RANDOM variables, GEOMETRIC quantization
- Abstract
This paper is concerned with the lossy compression of general random variables, specifically with rate-distortion theory and quantization of random variables taking values in general measurable spaces such as manifolds and fractal sets. Manifold structures are prevalent in data science, e.g., in compressed sensing, machine learning, image processing, and handwritten digit recognition. Fractal sets find application in image compression and in the modeling of Ethernet traffic. Our main contributions are bounds on the rate-distortion function and the quantization error. These bounds are very general and essentially only require the existence of reference measures satisfying certain regularity conditions in terms of small ball probabilities. To illustrate the wide applicability of our results, we particularize them to random variables taking values in (i) manifolds, namely, hyperspheres and Grassmannians and (ii) self-similar sets characterized by iterated function systems satisfying the weak separation property. [ABSTRACT FROM AUTHOR]
- Published
- 2023
7. RDO-Q: Extremely Fine-Grained Channel-Wise Quantization via Rate-Distortion Optimization
- Author
- Wang, Zhe, Lin, Jie, Geng, Xue, Aly, Mohamed M. Sabry, Chandrasekhar, Vijay, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Avidan, Shai, editor, Brostow, Gabriel, editor, Cissé, Moustapha, editor, Farinella, Giovanni Maria, editor, and Hassner, Tal, editor
- Published
- 2022
8. Efficient Data Compression Leads to Categorical Bias in Perception and Perceptual Memory
- Author
- Bates, Christopher J. and Jacobs, Robert A.
- Subjects
- Perception, memory, deep neural networks, rate-distortion theory, categorical bias
- Abstract
Efficient data compression is essential for capacity-limited systems, such as biological memory. We hypothesize that the need for efficient data compression shapes biological perception and perceptual memory in many of the same ways that it shapes engineered systems. If true, then the tools that engineers use to analyze and design systems, namely rate-distortion theory (RDT), can profitably be used to understand perception and memory. To date, researchers have used deep neural networks to approximately implement RDT in high-dimensional spaces, but these implementations have been limited to tasks in which the sole goal is compression with respect to reconstruction error. Here, we introduce a new deep neural network architecture that approximately implements RDT in a task-general manner. An important property of our architecture is that it is trained "end-to-end", operating on raw perceptual input (e.g., pixels) rather than an intermediate level of abstraction, as is the case with most psychological models. We demonstrate that our framework can mimic categorical biases in perception and perceptual memory in several ways, and thus generates specific hypotheses that can be tested empirically in future work.
- Published
- 2019
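Several results above (entries 4, 8, and 16) analyze behavior against the rate-distortion function R(D). For a discrete source this curve can be traced numerically with the classical Blahut-Arimoto algorithm; the sketch below is the textbook iteration, not the deep-network approximation described in entry 8, and the binary test source is an illustrative choice.

```python
import math

def blahut_arimoto(p_x, dist, beta, iters=500):
    """One point on the rate-distortion curve of a discrete source.

    p_x  : source distribution over symbols x
    dist : dist[x][y] = distortion of reproducing x as y
    beta : Lagrange multiplier trading rate against distortion
    Returns (rate_in_nats, expected_distortion).
    """
    n, m = len(p_x), len(dist[0])
    q = [1.0 / m] * m                      # output marginal, init uniform
    for _ in range(iters):
        # optimal conditional q(y|x) given the current output marginal
        cond = []
        for x in range(n):
            w = [q[y] * math.exp(-beta * dist[x][y]) for y in range(m)]
            s = sum(w)
            cond.append([wi / s for wi in w])
        # re-estimate the output marginal from the conditional
        q = [sum(p_x[x] * cond[x][y] for x in range(n)) for y in range(m)]
    rate = sum(p_x[x] * cond[x][y] * math.log(cond[x][y] / q[y])
               for x in range(n) for y in range(m) if cond[x][y] > 0)
    D = sum(p_x[x] * cond[x][y] * dist[x][y]
            for x in range(n) for y in range(m))
    return rate, D

# Uniform binary source with Hamming distortion: the exact curve is
# R(D) = ln 2 - H_b(D) (in nats), which makes this easy to sanity-check.
R, D = blahut_arimoto([0.5, 0.5], [[0, 1], [1, 0]], beta=2.0)
```

Sweeping beta traces out the whole curve; for the binary source each returned (R, D) pair lies on R = ln 2 − H_b(D).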
9. Universal Randomized Guessing Subject to Distortion.
- Author
- Cohen, Asaf and Merhav, Neri
- Subjects
- INFORMATION technology security, RATE distortion theory, SOURCE code, CHANNEL coding, FEATURE extraction, NOISE measurement
- Abstract
In this paper, we consider the problem of guessing a sequence subject to a distortion constraint. Specifically, we assume the following game between Alice and Bob: Alice has a sequence $x$ of length $n$. Bob wishes to guess $x$, yet he is satisfied with finding any sequence $\hat{x}$ which is within a given distortion $D$ from $x$. Thus, he successively submits queries to Alice, until receiving an affirmative answer, stating that his guess was within the required distortion. Finding guessing strategies which minimize the number of guesses (the guesswork), and analyzing its properties (e.g., its $\rho$-th moment) has several applications in information security, source and channel coding. Guessing subject to a distortion constraint is especially useful when considering contemporary biometrically-secured systems, where the "password" which protects the data is not a single, fixed vector but rather a ball of feature vectors centered at some $x$, and any feature vector within the ball results in acceptance. We formally define the guessing problem under distortion in four different setups: memoryless sources, guessing through a noisy channel, sources with memory and individual sequences. We suggest a randomized guessing strategy which is asymptotically optimal for all setups and is five-fold universal, as it is independent of the source statistics, the channel, the moment to be optimized, the distortion measure and the distortion level. [ABSTRACT FROM AUTHOR]
- Published
- 2022
10. Satisficing in Time-Sensitive Bandit Learning.
- Author
- Russo, Daniel and Van Roy, Benjamin
- Subjects
- RATE distortion theory, MACHINE learning, ROBBERS, ACTIVE learning
- Abstract
Much of the recent literature on bandit learning focuses on algorithms that aim to converge on an optimal action. One shortcoming is that this orientation does not account for time sensitivity, which can play a crucial role when learning an optimal action requires much more information than near-optimal ones. Indeed, popular approaches, such as upper-confidence-bound methods and Thompson sampling, can fare poorly in such situations. We consider instead learning a satisficing action, which is near-optimal while requiring less information, and propose satisficing Thompson sampling, an algorithm that serves this purpose. We establish a general bound on expected discounted regret and study the application of satisficing Thompson sampling to linear and infinite-armed bandits, demonstrating arbitrarily large benefits over Thompson sampling. We also discuss the relation between the notion of satisficing and the theory of rate distortion, which offers guidance on the selection of satisficing actions. [ABSTRACT FROM AUTHOR]
- Published
- 2022
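The bandit machinery behind this entry is Thompson sampling. Below is a minimal Bernoulli version; the arm means, horizon, and Beta(1,1) priors are illustrative assumptions, and the paper's satisficing variant modifies the target (settling for a near-optimal action) rather than this basic posterior-sampling loop.

```python
import random

def thompson_bernoulli(true_means, horizon, seed=0):
    """Plain Bernoulli Thompson sampling with Beta(1,1) priors.

    Returns the number of times each arm was pulled. The satisficing
    variant in the paper changes the objective, not this machinery."""
    rng = random.Random(seed)
    k = len(true_means)
    wins = [1] * k    # Beta posterior alpha parameters
    losses = [1] * k  # Beta posterior beta parameters
    pulls = [0] * k
    for _ in range(horizon):
        # sample a plausible mean for each arm, play the argmax
        samples = [rng.betavariate(wins[i], losses[i]) for i in range(k)]
        a = samples.index(max(samples))
        reward = 1 if rng.random() < true_means[a] else 0
        wins[a] += reward
        losses[a] += 1 - reward
        pulls[a] += 1
    return pulls

pulls = thompson_bernoulli([0.3, 0.5, 0.6], horizon=5000)
```

Over a long horizon the pull counts concentrate on the best arm; the satisficing critique is precisely that this concentration can be needlessly expensive when a near-optimal arm would do.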
11. Interpreting rate-distortion of variational autoencoder and using model uncertainty for anomaly detection.
- Author
- Park, Seonho, Adosoglou, George, and Pardalos, Panos M.
- Abstract
Building a scalable machine learning system for unsupervised anomaly detection via representation learning is highly desirable. One of the prevalent methods is using a reconstruction error of variational autoencoder (VAE) by maximizing the evidence lower bound. We revisit VAE from the perspective of information theory to provide some theoretical foundations on using the reconstruction error and finally arrive at a simpler yet effective model for anomaly detection. In addition, to enhance the effectiveness of detecting anomalies, we incorporate a practical model uncertainty measure into the anomaly score. We show empirically the competitive performance of our approach on benchmark data sets. [ABSTRACT FROM AUTHOR]
- Published
- 2022
12. Linear Computation Coding: A Framework for Joint Quantization and Computing.
- Author
- Müller, Ralf Reiner, Gäde, Bernhard Martin Wilhelm, and Bereyhi, Ali
- Subjects
- LINEAR codes, LOSSY data compression, RATE distortion theory, ARTIFICIAL neural networks, MATRIX multiplications, ARITHMETIC
- Abstract
Here we introduce the new concept of computation coding. Similar to how rate-distortion theory is concerned with the lossy compression of data, computation coding deals with the lossy computation of functions. Particularizing to linear functions, we present an algorithmic approach to reduce the computational cost of multiplying a constant matrix with a variable vector, which requires neither a matrix nor vector having any particular structure or statistical properties. The algorithm decomposes the constant matrix into the product of codebook and wiring matrices whose entries are either zero or signed integer powers of two. For a typical application like the implementation of a deep neural network, the proposed algorithm reduces the number of required addition units several times. To achieve the accuracy of 16-bit signed integer arithmetic for 4k-vectors, no multipliers and only 1.5 adders per matrix entry are needed. [ABSTRACT FROM AUTHOR]
- Published
- 2022
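The core trick in this entry is restricting matrix entries to zero or signed powers of two, so that every multiplication becomes a bit shift and a sign flip. The paper's actual algorithm factors the matrix into codebook and wiring matrices; the toy sketch below only illustrates the building block, quantizing each entry of a random (assumed) matrix to the nearest signed power of two and measuring the resulting matrix-vector error.

```python
import math, random

def nearest_signed_pow2(x):
    """Quantize x to the nearest value of the form +/- 2**k (or 0)."""
    if x == 0:
        return 0.0
    k = round(math.log2(abs(x)))        # round the exponent; the paper's
    return math.copysign(2.0 ** k, x)   # codebook/wiring products refine this

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

random.seed(1)
A = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(8)]
Aq = [[nearest_signed_pow2(a) for a in row] for row in A]
v = [random.uniform(-1, 1) for _ in range(8)]

exact, approx = matvec(A, v), matvec(Aq, v)
rel_err = (sum((e - a) ** 2 for e, a in zip(exact, approx)) /
           sum(e ** 2 for e in exact)) ** 0.5
```

With Aq, the product needs no multipliers at all; the paper drives the residual error down much further by composing several such shift-only matrices, at roughly 1.5 adders per matrix entry.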
13. Evaluating Ecohydrological Model Sensitivity to Input Variability with an Information-Theory-Based Approach.
- Author
- Farahani, Mozhgan A., Vahid, Alireza, and Goodwell, Allison E.
- Subjects
- RATE distortion theory, EDDY flux, HEAT flux, VAPOR pressure, WIND speed, DATA binning
- Abstract
Ecohydrological models vary in their sensitivity to forcing data and use available information to different extents. We focus on the impact of forcing precision on ecohydrological model behavior particularly by quantizing, or binning, time-series forcing variables. We use rate-distortion theory to quantize time-series forcing variables to different precisions. We evaluate the effect of different combinations of quantized shortwave radiation, air temperature, vapor pressure deficit, and wind speed on simulated heat and carbon fluxes for a multi-layer canopy model, which is forced and validated with eddy covariance flux tower observation data. We find that the model is more sensitive to radiation than meteorological forcing input, but model responses also vary with seasonal conditions and different combinations of quantized inputs. While any level of quantization impacts carbon flux similarly, specific levels of quantization influence heat fluxes to different degrees. This study introduces a method to optimally simplify forcing time series, often without significantly decreasing model performance, and could be applied within a sensitivity analysis framework to better understand how models use available information. [ABSTRACT FROM AUTHOR]
- Published
- 2022
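The central operation in this entry is quantizing (binning) a forcing time series to a chosen precision. The study derives rate-distortion-based quantizers; the sketch below substitutes a plain uniform quantizer applied to a synthetic, hypothetical radiation-like series, simply to show how distortion falls as bits are added.

```python
import math, random

def quantize_uniform(series, bits):
    """Bin a time series into 2**bits uniform levels (bin midpoints)."""
    lo, hi = min(series), max(series)
    n_levels = 2 ** bits
    width = (hi - lo) / n_levels or 1.0   # guard against a constant series
    out = []
    for x in series:
        idx = min(int((x - lo) / width), n_levels - 1)
        out.append(lo + (idx + 0.5) * width)
    return out

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

random.seed(0)
# hypothetical stand-in for an hourly shortwave-radiation forcing series
forcing = [max(0.0, 800 * math.sin(math.pi * (t % 24) / 24)
               + random.gauss(0, 50)) for t in range(24 * 30)]
errors = {b: mse(forcing, quantize_uniform(forcing, b)) for b in (2, 4, 6)}
```

The distortion in `errors` shrinks steeply with each added bit; a sensitivity analysis like the paper's then asks at which precision the model's outputs stop improving.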
14. Optimal Rate-Distortion-Leakage Tradeoff for Single-Server Information Retrieval.
- Author
- Yakimenka, Yauhen, Lin, Hsuan-Yin, Rosnes, Eirik, and Kliewer, Jörg
- Subjects
- INFORMATION retrieval, RATE distortion theory, PRIVACY
- Abstract
Private information retrieval protocols guarantee that a user can privately and losslessly retrieve a single file from a database stored across multiple servers. In this work, we propose to simultaneously relax the conditions of perfect retrievability and privacy in order to obtain improved download rates when all files are stored uncoded on a single server. Information leakage is measured in terms of the average success probability for the server of correctly guessing the identity of the desired file. The main findings are: i) The derivation of the optimal tradeoff between download rate, distortion, and information leakage when the file size is infinite. Closed-form expressions of the optimal tradeoff for the special cases of “no-leakage” and “no-privacy” are also given. ii) A novel approach based on linear programming (LP) to construct schemes for a finite file size and an arbitrary number of files. The proposed LP approach can be leveraged to find provably optimal schemes with corresponding closed-form expressions for the rate-distortion-leakage tradeoff when the database contains at most four bits. Finally, for a database that contains 320 bits, we compare two construction methods based on the LP approach with a nonconstructive scheme downloading subsets of files using a finite-length lossy compressor based on random coding. [ABSTRACT FROM AUTHOR]
- Published
- 2022
15. Modelling the utility of group testing for public health surveillance
- Author
- Günther Koliander and Georg Pichler
- Subjects
- Group testing, Public health surveillance, Source coding, Rate-distortion theory, Infectious and parasitic diseases, RC109-216
- Abstract
In epidemic or pandemic situations, resources for testing the infection status of individuals may be scarce. Although group testing can help to significantly increase testing capabilities, the (repeated) testing of entire populations can exceed the resources of any country. We thus propose an extension of the theory of group testing that takes into account the fact that definitely specifying the infection status of each individual is impossible. Our theory builds on assigning to each individual an infection status (healthy/infected), as well as an associated cost function for erroneous assignments. This cost function is versatile, e.g., it could take into account that false negative assignments are worse than false positive assignments and that false assignments in critical areas, such as health care workers, are more severe than in the general population. Based on this model, we study the optimal use of a limited number of tests to minimize the expected cost. More specifically, we utilize information-theoretic methods to give a lower bound on the expected cost and describe simple strategies that can significantly reduce the expected cost over currently known strategies. A detailed example is provided to illustrate our theory.
- Published
- 2021
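For orientation on the group-testing baseline this entry extends: under classical two-stage Dorfman pooling with prevalence p and pool size s, the expected number of tests per person is 1/s + 1 − (1 − p)^s, and s can be chosen to minimize it. This is the textbook scheme, not the cost-weighted strategy the paper proposes.

```python
def dorfman_tests_per_person(p, s):
    """Expected tests per individual for two-stage Dorfman pooling:
    one pooled test shared by s people, plus s individual retests when
    the pool is positive, which happens with probability 1-(1-p)**s."""
    return 1.0 / s + 1.0 - (1.0 - p) ** s

def best_pool_size(p, max_s=100):
    """Pool size minimizing the expected tests per person."""
    return min(range(2, max_s + 1),
               key=lambda s: dorfman_tests_per_person(p, s))

s_opt = best_pool_size(0.01)                  # 1% prevalence
cost = dorfman_tests_per_person(0.01, s_opt)  # expected tests per person
```

At 1% prevalence the optimal pool size is around eleven, and pooling cuts the testing budget to roughly a fifth of individual testing, which is the resource pressure motivating the paper's extension.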
16. Exploring the Cost Function in Color Perception and Memory: An Information-Theoretic Model of Categorical Effects in Color Matching
- Author
- Sims, Chris R., Ma, Zheng, Allred, Sarah R., Lerch, Rachel A., and Flombaum, Jonathan I.
- Subjects
- color perception, visual working memory, information theory, rate–distortion theory
- Abstract
Recent evidence indicates that color categories can exert a strong influence over color matching in both perception and memory. We explore this phenomenon by analyzing the cost function for perceptual error. Our analysis is developed within the mathematical framework of rate–distortion theory. According to our approach, the goal of perception is to minimize the expected cost of error while subject to a strong constraint on the capacity of perceptual processing. We propose that the cost function in color perception is defined by the sum of two components: a metric cost associated with the magnitude of error in color space, and a cost associated with perceptual errors that cross color category boundaries. A computational model embodying this assumption is shown to produce an excellent fit to empirical data. The results generally suggest that what appear as 'errors' in working memory performance may reflect reasonable and systematic behaviors in the context of costs.
- Published
- 2016
17. Optimal Causal Rate-Constrained Sampling for a Class of Continuous Markov Processes.
- Author
- Guo, Nian and Kostina, Victoria
- Subjects
- CONTINUOUS processing, MARKOV processes, ORNSTEIN-Uhlenbeck process, STOCHASTIC processes, TIMESTAMPS, COST control
- Abstract
Consider the following communication scenario. An encoder observes a stochastic process and causally decides when and what to transmit about it, under a constraint on the expected number of bits transmitted per second. A decoder uses the received codewords to causally estimate the process in real time. The encoder and the decoder are synchronized in time. For a class of continuous Markov processes satisfying regularity conditions, we find the optimal encoding and decoding policies that minimize the end-to-end estimation mean-square error under the rate constraint. We show that the optimal encoding policy transmits a 1-bit codeword once the process innovation passes one of two thresholds. The optimal decoder noiselessly recovers the last sample from the 1-bit codewords and codeword-generating time stamps, and uses it to decide the running estimate of the current process, until the next codeword arrives. In particular, we show the optimal causal code for the Ornstein-Uhlenbeck process and calculate its distortion-rate function. Furthermore, we show that the optimal causal code also minimizes the mean-square cost of a continuous-time control system driven by a continuous Markov process and controlled by an additive control signal. [ABSTRACT FROM AUTHOR]
- Published
- 2021
18. Robust Decentralized and Distributed Estimation of a Correlated Parameter Vector in MIMO-OFDM Wireless Sensor Networks.
- Author
- Rajput, Kunwar Pritiraj, Ahmed, Mohammad Faisal, Venkategowda, Naveen K. D., Jagannatham, Aditya K., Sharma, Govind, and Hanzo, Lajos
- Subjects
- SENSOR networks, WIRELESS sensor networks, ORTHOGONAL frequency division multiplexing, PARAMETER estimation, RATE distortion theory
- Abstract
An optimal precoder design is conceived for the decentralized estimation of an unknown spatially as well as temporally correlated parameter vector in a multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) based wireless sensor network (WSN). Furthermore, exploiting the temporal correlation present in the parameter vector, a rate-distortion theory based framework is developed for the optimal quantization of the sensor observations so that the resultant distortion is minimized for a given bit-budget. Subsequently, optimal precoders are also developed that minimize the sum-MSE (SMSE) for the scenario of transmitting quantized observations. In order to reduce the computational complexity of the decentralized framework, distributed precoder design algorithms are also developed which design precoders using the consensus based alternating direction method of multipliers (ADMM), wherein each SN determines its precoders without any central coordination by the fusion center. Finally, new robust MIMO precoder designs are proposed for practical scenarios operating in the face of channel state information (CSI) uncertainty. Our simulation results demonstrate the improved performance of the proposed schemes and corroborate our analytical formulations. [ABSTRACT FROM AUTHOR]
- Published
- 2021
19. Optimal Causal Rate-Constrained Sampling of the Wiener Process.
- Author
- Guo, Nian and Kostina, Victoria
- Subjects
- WIENER processes, SAMPLING (Process), CHANNEL coding, COST control, RATE distortion theory
- Abstract
We consider the following communication scenario. An encoder causally observes the Wiener process and decides when and what to transmit about it. A decoder estimates the process using causally received codewords in real time. We determine the causal encoding and decoding policies that jointly minimize the mean-square estimation error, under the long-term communication rate constraint of $R$ bits per second. We show that an optimal encoding policy can be implemented as a causal sampling policy followed by a causal compressing policy. We prove that the optimal encoding policy samples the Wiener process once the innovation passes either $\sqrt{\frac{1}{R}}$ or $-\sqrt{\frac{1}{R}}$ and compresses the sign of innovation (SOI) using a 1-bit codeword. The SOI coding scheme achieves the operational distortion-rate function, which is equal to $D^{\mathrm{op}}(R)=\frac{1}{6R}$. Surprisingly, this is significantly better than the distortion-rate tradeoff achieved in the limit of infinite delay by the best noncausal code. This is because the SOI coding scheme leverages the free timing information supplied by the zero-delay channel between the encoder and the decoder. The key to unlocking that gain is the event-triggered nature of the SOI sampling policy. In contrast, the distortion-rate tradeoffs achieved with deterministic sampling policies are much worse: we prove that the causal informational distortion-rate function in that scenario is as high as $D_{\mathrm{DET}}(R) = \frac{5}{6R}$. It is achieved by the uniform sampling policy with the sampling interval $\frac{1}{R}$. In either case, the optimal strategy is to sample the process as fast as possible and to transmit 1-bit codewords to the decoder without delay. We show that the SOI coding scheme also minimizes the mean-square cost of a continuous-time control system driven by the Wiener process and controlled via rate-constrained impulses. [ABSTRACT FROM AUTHOR]
- Published
- 2022
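The sign-of-innovation (SOI) scheme in this entry is easy to check by simulation: sample whenever the innovation hits ±√(1/R), send one bit, and hold the estimate in between. The crude Euler discretization below (step size, horizon, and seed are arbitrary choices) should land near the abstract's distortion-rate function D(R) = 1/(6R).

```python
import math, random

def soi_mse(R, T=1000.0, dt=1e-3, seed=0):
    """Monte-Carlo estimate of the SOI scheme's mean-square error.

    The encoder watches the innovation w - what; when it reaches
    +/- sqrt(1/R) it sends a 1-bit codeword (the sign), and the decoder
    jumps its running estimate by that threshold.  Threshold sqrt(1/R)
    makes the mean inter-sample time 1/R, i.e. R bits per second."""
    rng = random.Random(seed)
    thr = math.sqrt(1.0 / R)
    w = what = 0.0
    acc = steps = 0
    t = 0.0
    while t < T:
        w += rng.gauss(0.0, math.sqrt(dt))   # Wiener increment
        innov = w - what
        if abs(innov) >= thr:                # event-triggered 1-bit codeword
            what += math.copysign(thr, innov)
        acc += (w - what) ** 2
        steps += 1
        t += dt
    return acc / steps

emp = soi_mse(R=1.0)   # theory: D(R) = 1/(6R), i.e. about 0.167 at R = 1
```

The small gap between the empirical value and 1/(6R) comes from the Euler discretization (the process slightly overshoots the threshold between steps) and Monte-Carlo noise.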
20. Transfer-Entropy-Regularized Markov Decision Processes.
- Author
- Tanaka, Takashi, Sandberg, Henrik, and Skoglund, Mikael
- Subjects
- MARKOV processes, SYSTEMS theory, STOCHASTIC processes, NONLINEAR equations, ENTROPY, TRAJECTORY optimization, NONEQUILIBRIUM thermodynamics
- Abstract
We consider the framework of transfer-entropy-regularized Markov decision process (TERMDP) in which the weighted sum of the classical state-dependent cost and the transfer entropy from the state random process to the control input process is minimized. Although TERMDPs are generally formulated as nonconvex optimization problems, an analytical necessary optimality condition can be expressed as a finite set of nonlinear equations, based on which an iterative forward–backward computational procedure similar to the Arimoto–Blahut algorithm is developed. It is shown that every limit point of the sequence generated by the proposed algorithm is a stationary point of the TERMDP. Applications of TERMDPs are discussed in the context of networked control systems theory and nonequilibrium thermodynamics. The proposed algorithm is applied to an information-constrained maze navigation problem, whereby we study how the price of information qualitatively alters the optimal decision polices. [ABSTRACT FROM AUTHOR]
- Published
- 2022
21. Linear Computation Coding: A Framework for Joint Quantization and Computing
- Author
- Ralf Reiner Müller, Bernhard Martin Wilhelm Gäde, and Ali Bereyhi
- Subjects
- approximate computing, computational complexity, estimation error, fixed-point arithmetic, linear systems, rate-distortion theory, Industrial engineering. Management engineering, T55.4-60.8, Electronic computers. Computer science, QA75.5-76.95
- Abstract
Here we introduce the new concept of computation coding. Similar to how rate-distortion theory is concerned with the lossy compression of data, computation coding deals with the lossy computation of functions. Particularizing to linear functions, we present an algorithmic approach to reduce the computational cost of multiplying a constant matrix with a variable vector, which requires neither a matrix nor vector having any particular structure or statistical properties. The algorithm decomposes the constant matrix into the product of codebook and wiring matrices whose entries are either zero or signed integer powers of two. For a typical application like the implementation of a deep neural network, the proposed algorithm reduces the number of required addition units several times. To achieve the accuracy of 16-bit signed integer arithmetic for 4k-vectors, no multipliers and only 1.5 adders per matrix entry are needed.
- Published
- 2022
22. Evaluating Ecohydrological Model Sensitivity to Input Variability with an Information-Theory-Based Approach
- Author
- Mozhgan A. Farahani, Alireza Vahid, and Allison E. Goodwell
- Subjects
- sensitivity analysis, quantization, information theory, rate-distortion theory, ecohydrological modeling, Science, Astrophysics, QB460-466, Physics, QC1-999
- Abstract
Ecohydrological models vary in their sensitivity to forcing data and use available information to different extents. We focus on the impact of forcing precision on ecohydrological model behavior particularly by quantizing, or binning, time-series forcing variables. We use rate-distortion theory to quantize time-series forcing variables to different precisions. We evaluate the effect of different combinations of quantized shortwave radiation, air temperature, vapor pressure deficit, and wind speed on simulated heat and carbon fluxes for a multi-layer canopy model, which is forced and validated with eddy covariance flux tower observation data. We find that the model is more sensitive to radiation than meteorological forcing input, but model responses also vary with seasonal conditions and different combinations of quantized inputs. While any level of quantization impacts carbon flux similarly, specific levels of quantization influence heat fluxes to different degrees. This study introduces a method to optimally simplify forcing time series, often without significantly decreasing model performance, and could be applied within a sensitivity analysis framework to better understand how models use available information.
- Published
- 2022
23. A Trellis-Coded Quantization Approach to Transmitting Correlated Gaussian Sources Over a Fading MAC Without Transmitter-CSI.
- Author
- Yahampath, Pradeepa and Illangakoon, Chathura
- Subjects
- TRANSMITTERS (Communication), CHANNEL coding, VECTOR quantization, SIGNAL-to-noise ratio, COMMUNICATION barriers, RATE distortion theory
- Abstract
It is known that source-channel separation is sub-optimal for communicating correlated Gaussian sources over a Gaussian multiple access channel (GMAC). Considering a two-to-one GMAC which undergoes Rayleigh block-fading, we present a novel approach to practical joint source-channel coding for the scenario in which the common receiver has instantaneous CSI but only the CSI distribution is available to the individual transmitters. This approach, referred to as source-channel trellis-coded vector quantization (SC-TCVQ), simply relies on using TCVQs as fixed-rate source-channel encoders. One key issue is the optimization of the TCVQ codebooks to the mean channel signal-to-noise ratio (CSNR), and to this end, we present an analytical method to obtain the rates required for codebook design. Another key issue is the joint estimation of the sources at the receiver, for which we present a detector-estimator based on the Cartesian product of the two encoder-trellises. Simulation results show that the proposed SC-TCVQ codes, in some cases, can even beat the asymptotic performance bound for a separate source-channel code consisting of a distributed vector quantizer and capacity achieving channel codes. SC-TCVQ appears to be the best known practical code design to date for the given communication problem. [ABSTRACT FROM AUTHOR]
- Published
- 2021
24. Deep Neural Network Approximation Theory.
- Author
- Elbrächter, Dennis, Perekrestenko, Dmytro, Grohs, Philipp, and Bölcskei, Helmut
- Subjects
- APPROXIMATION theory, SMOOTHNESS of functions, BESOV spaces, APPROXIMATION error, MACHINE learning, ELECTRIC network topology
- Abstract
This paper develops fundamental limits of deep neural network learning by characterizing what is possible if no constraints are imposed on the learning algorithm and on the amount of training data. Concretely, we consider Kolmogorov-optimal approximation through deep neural networks with the guiding theme being a relation between the complexity of the function (class) to be approximated and the complexity of the approximating network in terms of connectivity and memory requirements for storing the network topology and the associated quantized weights. The theory we develop establishes that deep networks are Kolmogorov-optimal approximants for markedly different function classes, such as unit balls in Besov spaces and modulation spaces. In addition, deep networks provide exponential approximation accuracy—i.e., the approximation error decays exponentially in the number of nonzero weights in the network—of the multiplication operation, polynomials, sinusoidal functions, and certain smooth functions. Moreover, this holds true even for one-dimensional oscillatory textures and the Weierstrass function—a fractal function, neither of which has previously known methods achieving exponential approximation accuracy. We also show that in the approximation of sufficiently smooth functions finite-width deep networks require strictly smaller connectivity than finite-depth wide networks. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
25. Nonstationary Gauss-Markov Processes: Parameter Estimation and Dispersion.
- Author
-
Tian, Peida and Kostina, Victoria
- Subjects
- *
PARAMETER estimation , *MARKOV processes , *RANDOM variables , *ERROR probability , *DISPERSION (Chemistry) , *COVARIANCE matrices - Abstract
This paper provides a precise error analysis for the maximum likelihood estimate $\hat{a}_{\text{ML}}(u_1^n)$ of the parameter $a$ given samples $u_1^n = (u_1, \ldots, u_n)'$ drawn from a nonstationary Gauss-Markov process $U_i = a U_{i-1} + Z_i$, $i \geq 1$, where $U_0 = 0$, $a > 1$, and the $Z_i$ are independent Gaussian random variables with zero mean and variance $\sigma^2$. We show a tight nonasymptotic exponentially decaying bound on the tail probability of the estimation error. Unlike previous works, our bound is tight already for a sample size of the order of hundreds. We apply the new estimation bound to find the dispersion for lossy compression of nonstationary Gauss-Markov sources. We show that the dispersion is given by the same integral formula that we derived previously for the asymptotically stationary Gauss-Markov sources, i.e., $|a| < 1$. New ideas in the nonstationary case include separately bounding the maximum eigenvalue (which scales exponentially) and the other eigenvalues (which are bounded by constants that depend only on $a$) of the covariance matrix of the source sequence, and new techniques in the derivation of our estimation error bound. [ABSTRACT FROM AUTHOR]
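For this autoregressive model, the maximum likelihood estimate coincides with the least-squares regression of $u_i$ on $u_{i-1}$, i.e., $\hat{a}_{\text{ML}} = \sum_{i=1}^{n} u_i u_{i-1} / \sum_{i=1}^{n} u_{i-1}^2$. A minimal simulation sketch of this estimator (an illustration only, not the paper's error analysis; the parameter values are arbitrary):

```python
import random

def simulate_gauss_markov(a, sigma, n, seed=0):
    """Simulate U_i = a*U_{i-1} + Z_i with U_0 = 0 and Z_i ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    u = [0.0]
    for _ in range(n):
        u.append(a * u[-1] + rng.gauss(0.0, sigma))
    return u  # u[0] = U_0, ..., u[n] = U_n

def ml_estimate(u):
    """ML (= least-squares) estimate of a from samples u_1, ..., u_n."""
    num = sum(u[i] * u[i - 1] for i in range(1, len(u)))
    den = sum(u[i - 1] ** 2 for i in range(1, len(u)))
    return num / den

u = simulate_gauss_markov(a=1.2, sigma=1.0, n=300)
a_hat = ml_estimate(u)
print(abs(a_hat - 1.2))  # tiny: the estimator converges very fast when a > 1
```

In the unstable regime $a > 1$ studied in this paper, the estimation error decays exponentially in $n$, which is why even a few hundred samples give an essentially exact estimate.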
- Published
- 2021
- Full Text
- View/download PDF
26. Minimizing the average achievable distortion using multi-layer coding approach in two-hop networks.
- Author
-
Khodam Hoseini, Sayed Ali, Akhlaghi, Soroush, and Baghani, Mina
- Abstract
Minimizing the average achievable distortion (AAD) of a Gaussian source at the destination of a two-hop block fading relay channel is studied in this paper. The communication is carried out through the use of a Decode and Forward (DF) relay with no source-destination direct links. The associated receivers of both hops are assumed to be aware of the corresponding channel state information (CSI), while the transmitters are unaware of their corresponding CSI. The current paper explores the effectiveness of incorporating successive refinement source coding together with multi-layer channel coding in minimizing the AAD. In this regard, the closed-form optimal power allocation policy across the code layers of the second hop is derived, and, using a proper curve-fitting approach, a close-to-optimal power allocation policy for the first hop is devised. It is numerically shown that the DF strategy closely follows Amplify and Forward (AF) relaying, while there is a sizable gap between the AAD of multi-layer coding and single-layer source coding. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
27. Bayesian Reinforcement Learning With Limited Cognitive Load.
- Author
-
Arumugam D, Ho MK, Goodman ND, and Van Roy B
- Abstract
All biological and artificial agents must act given limits on their ability to acquire and process information. As such, a general theory of adaptive behavior should be able to account for the complex interactions between an agent's learning history, decisions, and capacity constraints. Recent work in computer science has begun to clarify the principles that shape these dynamics by bridging ideas from reinforcement learning, Bayesian decision-making, and rate-distortion theory. This body of work provides an account of capacity-limited Bayesian reinforcement learning, a unifying normative framework for modeling the effect of processing constraints on learning and action selection. Here, we provide an accessible review of recent algorithms and theoretical results in this setting, paying special attention to how these ideas can be applied to studying questions in the cognitive and behavioral sciences. (Competing Interests: The authors declare no conflict of interests.) (© 2024 Massachusetts Institute of Technology.)
- Published
- 2024
- Full Text
- View/download PDF
28. A Theory for Optimizing Insecure Internetwork Connections.
- Author
-
Das, Pankaz, Shuvro, Rezoan A., Rahnamay-Naeini, Mahshid, Povinelli, Kassie, Ghani, Nasir, and Hayat, Majeed M.
- Abstract
While internetwork connections among private and public networks are inevitable and essential for enhancing the data rate and range of communication among networks, they introduce new vulnerabilities and risks of security breach to the interconnected networks. In this article, a novel information-theoretic model is proposed to analyze the tradeoff between network security and data-rate enhancement considering homogeneous and heterogeneous internetwork connections between two networks. Various patterns of internetwork connections among gateway nodes are studied. The implications of internetwork connections on data manipulation due to security breach by attackers are investigated by modeling the internetwork connections as a noisy communication channel. Specifically, the fundamental limit of the maximum achievable capacity afforded by internetwork connections is characterized, while considering the impact of security vulnerabilities that the internetwork connections introduce. For a nominal data rate below the fundamental limit, the effect of a security breach can be diminished to an arbitrarily low level by channel coding. Notably, any nominal data rate above the fundamental limit is not achievable; hence, a packet-loss distortion is introduced to keep the data rate below the fundamental limit. Moreover, the fundamental limit is used to find the optimal number of internetwork connections between two networks under prescribed data-rate and packet-loss distortion constraints. [ABSTRACT FROM AUTHOR]
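As a toy instance of the noisy-channel abstraction used in this abstract, one can model an attacker that flips a fraction $p$ of transiting bits as a binary symmetric channel, whose capacity $C = 1 - h_2(p)$ is the fundamental limit below which channel coding can suppress the breach's effect. A hedged sketch (the BSC is an assumption chosen for illustration, not necessarily the article's exact channel model):

```python
import math

def h2(p):
    """Binary entropy function in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity (bits per channel use) of a binary symmetric channel
    with crossover probability p: rates below this are protectable by coding."""
    return 1.0 - h2(p)

# An attacker flipping 5% of transiting bits still leaves ~0.714 bits/use achievable
print(round(bsc_capacity(0.05), 3))
```

A completely compromised link (`p = 0.5`) has zero capacity, matching the intuition that no coding rate above the fundamental limit is achievable.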
- Published
- 2020
- Full Text
- View/download PDF
29. Consequences of a Functional Account of Information.
- Author
-
Mann, Stephen Francis
- Abstract
This paper aims to establish several interconnected points. First, a particular interpretation of the mathematical definition of information, known as the causal interpretation, is supported largely by misunderstandings of the engineering context from which it was taken. A better interpretation, which makes the definition and quantification of information relative to the function of its user, is outlined. The first half of the paper is given over to introducing communication theory and its competing interpretations. The second half explores three consequences of the main thesis. First, a popular claim that the quantification of information in a signal is irrelevant for the meaning of that signal is exposed as fallacious. Second, a popular distinction between causal and semantic information is shown to be misleading, and I argue it should be replaced with a related distinction between natural and intentional signs. Finally, I argue that recent empirical work from microbiology and cognitive science drawing on resources of mathematical communication theory is best interpreted by the functional account. Overall, a functional approach is shown to be both theoretically and empirically well-supported. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
30. Modeling Generalized Rate-Distortion Functions.
- Author
-
Duanmu, Zhengfang, Liu, Wentao, Li, Zhuoran, and Wang, Zhou
- Subjects
- *
DIGITAL video , *CHARACTERISTIC functions , *INFORMATION measurement , *PARAMETRIC modeling , *RATE distortion theory , *CURVES - Abstract
Many multimedia applications require precise understanding of the rate-distortion characteristics measured by the function relating visual quality to media attributes, which we term the generalized rate-distortion (GRD) function. In this study, we explore the GRD behavior of compressed digital videos in a two-dimensional space of bitrate and resolution. Our analysis on a large-scale video dataset reveals that empirical parametric models are systematically biased, while exhaustive search methods require excessive computation time to depict the GRD surfaces. By exploiting the properties that all GRD functions share, we develop a Robust Axial-Monotonic Clough-Tocher (RAMCT) interpolation method to model the GRD function. This model allows us to accurately reconstruct the complete GRD function of a source video content from a moderate number of measurements. To further reduce the computational cost, we present a novel sampling scheme based on a probabilistic model and an information measure. The proposed sampling method constructs a sequence of quality queries by minimizing the overall informativeness in the remaining samples. Experimental results show that the proposed algorithm significantly outperforms state-of-the-art approaches in accuracy and efficiency. Finally, we demonstrate the usage of the proposed model in three applications: rate-distortion curve prediction, per-title encoding profile generation, and video encoder comparison. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
31. Analysis of Affine Motion-Compensated Prediction in Video Coding.
- Author
-
Meuel, Holger and Ostermann, Jörn
- Subjects
- *
VIDEO coding , *VIDEO compression , *PROBABILITY density function , *BIT rate , *RATE distortion theory , *TRANSLATIONAL motion , *DATA compression - Abstract
Motion-compensated prediction is used in video coding standards like High Efficiency Video Coding (HEVC) as one key element of data compression. Commonly, a purely translational motion model is employed. In order to also cover non-translational motion types like rotation or scaling (zoom), e.g., contained in aerial video sequences such as captured from unmanned aerial vehicles (UAV), an affine motion model can be applied. In this work, a model for affine motion-compensated prediction in video coding is derived. Using the rate-distortion theory and the displacement estimation error caused by inaccurate affine motion parameter estimation, the minimum required bit rate for encoding the prediction error is determined. In this model, the affine transformation parameters are assumed to be affected by statistically independent estimation errors, which all follow a zero-mean Gaussian distributed probability density function (pdf). The joint pdf of the estimation errors is derived and transformed into the pdf of the location-dependent displacement estimation error in the image. The latter is related to the minimum required bit rate for encoding the prediction error. Similar to the derivations of the fully affine motion model, a four-parameter simplified affine model is investigated. Both models are of particular interest since they are considered for the upcoming video coding standard Versatile Video Coding (VVC) succeeding HEVC. Both models provide valuable information about the minimum bit rate for encoding the prediction error as a function of affine estimation accuracies. [ABSTRACT FROM AUTHOR]
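For orientation, a four-parameter simplified affine motion model of the kind mentioned above covers scaling, rotation, and translation. A minimal sketch (the scale/angle parameterization is a common convention assumed here for illustration, not the paper's exact notation):

```python
import math

def simplified_affine(points, scale, angle, tx, ty):
    """Map pixel coordinates with a four-parameter (scale, rotation,
    translation) affine model: x' = a*x - b*y + tx, y' = b*x + a*y + ty."""
    a = scale * math.cos(angle)
    b = scale * math.sin(angle)
    return [(a * x - b * y + tx, b * x + a * y + ty) for x, y in points]

# A pure 90-degree rotation maps the point (1, 0) to (0, 1)
pts = simplified_affine([(1.0, 0.0)], scale=1.0, angle=math.pi / 2, tx=0.0, ty=0.0)
print(pts)
```

A fully affine model would add two more degrees of freedom (independent shear/scaling per axis); the four-parameter form restricts motion to similarity transforms, which is why it is cheaper to estimate and signal.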
- Published
- 2020
- Full Text
- View/download PDF
32. Logarithmic Pyramid Vector Quantization—Design and Theoretical Analysis.
- Author
-
Sohn, Christian, Krüger, Hauke, and Vary, Peter
- Subjects
- *
VECTOR quantization , *BIT rate , *SIGNAL-to-noise ratio - Abstract
Vector quantization is an integral part of modern speech and audio codecs. This study proposes the logarithmic pyramid vector quantizer (LPVQ), which is a gain-shape vector quantizer specifically designed for the quantization of Laplace distributed memoryless sources. The objective of the study is a theoretical analysis of the LPVQ: We determine the distortion of the shape quantizer with respect to rate and vector dimension for the quantization of an i.i.d. Laplace source. Furthermore, we derive formulas for the quantization signal-to-noise ratio (SNR) of the shape quantizer and the quantization SNR of the LPVQ, giving reliable results for an effective bit rate per vector coordinate of 2 bits and higher. We study the SNR behavior of the LPVQ for infinite dimensions and compare it with the maximal achievable SNR according to the rate-distortion bound. After having proposed a strategy for the allocation of the bit rate for the gain quantization and the quantization of the shape vector, respectively, we verify the derived formulas by simulations. [ABSTRACT FROM AUTHOR]
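To make the gain-shape decomposition concrete, the sketch below splits a Laplace vector into a gain (its norm) and a unit-norm shape, then quantizes only the gain on a logarithmic (dB) grid; the pyramid shape quantizer itself is omitted. This is an illustrative toy under stated assumptions, not the LPVQ design:

```python
import math
import random

def gain_shape_split(x):
    """Split a vector into gain g = ||x|| and unit-norm shape x / ||x||."""
    g = math.sqrt(sum(v * v for v in x))
    return g, [v / g for v in x]

def log_gain_quantizer(g, step_db=1.5):
    """Uniform scalar quantization of the gain in the log (dB) domain."""
    g_db = 20.0 * math.log10(g)
    return 10.0 ** (round(g_db / step_db) * step_db / 20.0)

rng = random.Random(1)
# Laplace samples via a random sign times an Exp(1) magnitude
x = [(1 if rng.random() < 0.5 else -1) * rng.expovariate(1.0) for _ in range(64)]

g, shape = gain_shape_split(x)
g_hat = log_gain_quantizer(g)
# Reconstruct with the true shape to isolate the gain-quantization error
x_hat = [g_hat * s for s in shape]
err = sum((a - b) ** 2 for a, b in zip(x, x_hat))
snr_db = 10.0 * math.log10(sum(v * v for v in x) / err)
print(round(snr_db, 1))
```

With a 1.5 dB step, the gain error is at most 0.75 dB, which bounds the gain-only SNR from below at roughly 21 dB; the full LPVQ SNR additionally depends on the shape-quantizer rate, as analyzed in the paper.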
- Published
- 2020
- Full Text
- View/download PDF
33. Overhead-Performance Tradeoff for MISO Broadcast Channels With Zero-Forcing Precoder.
- Author
-
Liu, Guangyi, Feng, Hao, Wang, Qi, and Cimini, Leonard J.
- Abstract
By applying a rate-distortion approach, we investigate the relationship between the amount of overhead and the system performance from an information-theoretic point of view. Though the purpose of rate-distortion theory is to find a lower bound on lossy source coding problems, the concepts of distortion and rate can be extended to communication performance measures and the quality of feedback overhead, respectively. In this article, we study the overhead-performance tradeoff for a downlink MU-MISO channel system with limited feedback. The number of feedback bits required to characterize the channel state information is expressed as a function of the tolerable rate loss. The proposed method to derive this tradeoff is valid for any number of transmit antennas and users, and can help in designing practical systems where the impact of channel feedback overhead cannot be neglected. Comparing the obtained rate-distortion curves with vector quantization, it can be concluded that, for each user, by feeding back different numbers of bits for different channel power gains, the optimal performance can be achieved. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
34. Signaling games in multiple dimensions: Geometric properties of equilibrium solutions
- Author
-
Yuksel, Serdar, Kazikli, Ertan, Gezici, Sinan, Yuksel, Serdar, Kazikli, Ertan, and Gezici, Sinan
- Abstract
Signaling game problems investigate communication scenarios where encoder(s) and decoder(s) have misaligned objectives due to the fact that they either employ different cost functions or have inconsistent priors. This problem has been studied in the literature for scalar sources under various setups. In this paper, we consider multi-dimensional sources under quadratic criteria in the presence of a bias leading to a mismatch in the criteria, where we show that the generalization from the scalar setup is more than technical. We show that the Nash equilibrium solutions lead to structural richness due to the subtle geometric analysis the problem entails, with consequences in both system design, the presence of linear Nash equilibria, and an information theoretic problem formulation. We first provide a set of geometric conditions that must be satisfied in equilibrium considering any multi-dimensional source. Then, we consider independent and identically distributed sources and characterize necessary and sufficient conditions under which an informative linear Nash equilibrium exists. These conditions involve the bias vector that leads to misaligned costs. Depending on certain conditions related to the bias vector, the existence of linear Nash equilibria requires sources with a Gaussian or a symmetric density. Moreover, in the case of Gaussian sources, our results have a rate-distortion theoretic implication that achievable rates and distortions in the considered game theoretic setup can be obtained from its team theoretic counterpart. (© 2023 Elsevier Ltd. All rights reserved.) This work was supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada and in part by the Scientific and Technological Research Council of Turkey (TÜBİTAK).
- Published
- 2023
36. Secure Representation of Images Using Multi-layer Compression
- Author
-
Ferdowsi, Sohrab, Voloshynovskiy, Sviatoslav, Kostadinov, Dimche, Korytkowski, Marcin, Scherer, Rafał, Goebel, Randy, Series editor, Tanaka, Yuzuru, Series editor, Wahlster, Wolfgang, Series editor, Rutkowski, Leszek, editor, Korytkowski, Marcin, editor, Scherer, Rafal, editor, Tadeusiewicz, Ryszard, editor, Zadeh, Lotfi A., editor, and Zurada, Jacek M., editor
- Published
- 2015
- Full Text
- View/download PDF
37. Distortion Minimization Hashing
- Author
-
Tongtong Yuan, Weihong Deng, and Jiani Hu
- Subjects
Machine learning ,image retrieval ,hashing quantization ,rate-distortion theory ,iterative optimization ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
Application of the hashing method to large-scale image retrieval has drawn much attention because of the high efficiency and favorable accuracy of the method. Its related research generally involves two basic problems: similarity-preserving projection and information-preserving quantization. Most previous works focused on learning projection approaches, while the importance of quantization strategies was ignored. Although several hashing quantization models have been recently proposed to improve retrieval performance by assigning multiple bits to projected directions, these models still suffer from suboptimal results, as the critical information loss that occurs in the quantization procedure is not considered. In this paper, to construct an effective quantization model, we utilize rate-distortion theory in the hashing quantization procedure and minimize the distortion to reduce the information loss. Furthermore, combining principal component analysis with our quantization strategy, we present a quantization-based hashing method named distortion minimization hashing. Extensive experiments involving one synthetic data set and three image data sets demonstrate the superior performance of our proposed methods over several quantization techniques and state-of-the-art hashing methods.
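The core idea of distortion-minimizing quantization can be illustrated with Lloyd's algorithm on a single projected direction: alternately assign samples to their nearest codepoint and move each codepoint to its cell mean, monotonically reducing MSE. A minimal sketch (plain 1-D Lloyd under illustrative assumptions, not the paper's DMH method):

```python
import random

def lloyd_1d(samples, levels, iters=50):
    """Lloyd's algorithm: iteratively refine codepoints to reduce MSE distortion."""
    codebook = sorted(random.Random(0).sample(samples, levels))
    for _ in range(iters):
        # Assignment step: each sample goes to its nearest codepoint
        cells = [[] for _ in codebook]
        for s in samples:
            j = min(range(levels), key=lambda k: (s - codebook[k]) ** 2)
            cells[j].append(s)
        # Centroid step: each codepoint moves to the mean of its cell
        codebook = [sum(c) / len(c) if c else codebook[j]
                    for j, c in enumerate(cells)]
    return sorted(codebook)

def distortion(samples, codebook):
    """Mean squared error of nearest-codepoint quantization."""
    return sum(min((s - c) ** 2 for c in codebook) for s in samples) / len(samples)

rng = random.Random(42)
data = [rng.gauss(0.0, 1.0) for _ in range(2000)]
cb = lloyd_1d(data, levels=4)
print(distortion(data, cb))
```

For a unit-variance Gaussian, the optimal 4-level scalar quantizer achieves a distortion near 0.12 (Max's classical result), and Lloyd's iteration typically lands close to it.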
- Published
- 2017
- Full Text
- View/download PDF
38. Successive Refinement of Abstract Sources.
- Author
-
Kostina, Victoria and Tuncel, Ertem
- Subjects
- *
VIDEO coding , *RATE distortion theory , *RANDOM variables , *ENTROPY (Information theory) - Abstract
In successive refinement of information, the decoder refines its representation of the source progressively as it receives more encoded bits. The rate-distortion region of successive refinement describes the minimum rates required to attain the target distortions at each decoding stage. In this paper, we derive a parametric characterization of the rate-distortion region for successive refinement of abstract sources. Our characterization extends Csiszár's result to successive refinement, and generalizes a result by Tuncel and Rose, applicable for finite alphabet sources, to abstract sources. This characterization spawns a family of outer bounds to the rate-distortion region. It also enables an iterative algorithm for computing the rate-distortion region, which generalizes Blahut's algorithm to successive refinement. Finally, it leads to a new nonasymptotic converse bound. In all the scenarios where the dispersion is known, this bound is second-order optimal. In our proof technique, we avoid Karush–Kuhn–Tucker conditions of optimality, and we use basic tools of probability theory. We leverage the Donsker–Varadhan lemma for the minimization of relative entropy on abstract probability spaces. [ABSTRACT FROM AUTHOR]
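For reference, the classical single-stage Blahut iteration (which this paper generalizes to successive refinement) computes a point on the rate-distortion curve of a discrete source by alternating updates of the conditional distribution and the output marginal for a fixed Lagrange multiplier. A sketch for an equiprobable binary source with Hamming distortion, where the known closed form $R(D) = 1 - h_2(D)$ serves as a check:

```python
import math

def blahut_arimoto(p_x, dist, beta, iters=200):
    """Blahut's algorithm: returns one (D, R) point on the rate-distortion
    curve of a discrete source p_x with distortion matrix dist[x][xhat],
    parameterized by the Lagrange multiplier beta > 0."""
    nx, nxh = len(dist), len(dist[0])
    q = [1.0 / nxh] * nxh  # reproduction marginal, initialized uniform
    for _ in range(iters):
        # Conditional p(xhat | x) proportional to q(xhat) * exp(-beta * d(x, xhat))
        cond = []
        for x in range(nx):
            w = [q[xh] * math.exp(-beta * dist[x][xh]) for xh in range(nxh)]
            z = sum(w)
            cond.append([wi / z for wi in w])
        # Marginal update: q(xhat) = sum_x p(x) p(xhat | x)
        q = [sum(p_x[x] * cond[x][xh] for x in range(nx)) for xh in range(nxh)]
    D = sum(p_x[x] * cond[x][xh] * dist[x][xh]
            for x in range(nx) for xh in range(nxh))
    R = sum(p_x[x] * cond[x][xh] * math.log2(cond[x][xh] / q[xh])
            for x in range(nx) for xh in range(nxh) if cond[x][xh] > 0)
    return D, R

# Equiprobable binary source, Hamming distortion: R(D) = 1 - h2(D)
D, R = blahut_arimoto([0.5, 0.5], [[0, 1], [1, 0]], beta=2.0)
h2 = lambda p: -p * math.log2(p) - (1 - p) * math.log2(1 - p)
print(abs(R - (1 - h2(D))))  # ≈ 0
```

Sweeping `beta` traces out the whole curve; the successive-refinement generalization in the paper iterates over the joint conditionals of all decoding stages instead.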
- Published
- 2019
- Full Text
- View/download PDF
39. The Dispersion of the Gauss–Markov Source.
- Author
-
Tian, Peida and Kostina, Victoria
- Subjects
- *
PARAMETER estimation , *RANDOM variables , *RANDOM noise theory , *GAUSSIAN processes , *DISPERSION (Chemistry) , *INTEGRAL representations - Abstract
The Gauss–Markov source produces $U_i = aU_{i-1} + Z_i$ for $i \geq 1$, where $U_0 = 0$, $|a| < 1$, and $Z_i \sim \mathcal{N}(0, \sigma^2)$ are i.i.d. Gaussian random variables. We consider lossy compression of a block of $n$ samples of the Gauss–Markov source under squared error distortion. We obtain the Gaussian approximation for the Gauss–Markov source with the excess-distortion criterion for any distortion $d > 0$, and we show that the dispersion has a reverse waterfilling representation. This is the first finite blocklength result for lossy compression of sources with memory. We prove that the finite blocklength rate-distortion function $R(n,d,\epsilon)$ approaches the rate-distortion function $\mathbb{R}(d)$ as $R(n,d,\epsilon) = \mathbb{R}(d) + \sqrt{\frac{V(d)}{n}}Q^{-1}(\epsilon) + o\left(\frac{1}{\sqrt{n}}\right)$, where $V(d)$ is the dispersion, $\epsilon \in (0,1)$ is the excess-distortion probability, and $Q^{-1}$ is the inverse $Q$-function. We give a reverse waterfilling integral representation for the dispersion $V(d)$, which parallels that of the rate-distortion functions for Gaussian processes. Remarkably, for all $0 < d \leq \frac{\sigma^2}{(1+|a|)^2}$, $R(n,d,\epsilon)$ of the Gauss–Markov source coincides with that of $Z_i$, the i.i.d. Gaussian noise driving the process, up to the second-order term. Among the novel technical tools developed in this paper are a sharp approximation of the eigenvalues of the covariance matrix of $n$ samples of the Gauss–Markov source and a construction of a typical set using the maximum likelihood estimate of the parameter $a$ based on $n$ observations. [ABSTRACT FROM AUTHOR]
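To illustrate the Gaussian approximation above in its simplest special case: for an i.i.d. Gaussian source (the $a = 0$ case) under squared error, the rate-distortion function is $\mathbb{R}(d) = \frac{1}{2}\ln(\sigma^2/d)$ nats and the dispersion is known to equal $V(d) = 1/2$ nats$^2$. A sketch evaluating the resulting second-order rate estimate (an illustration of the formula, not the paper's Gauss-Markov computation):

```python
from math import log, sqrt
from statistics import NormalDist

def finite_blocklength_rate(n, d, eps, sigma2=1.0):
    """Second-order (Gaussian) approximation, in nats per sample, for lossy
    compression of an i.i.d. N(0, sigma2) source under squared error:
    R(n,d,eps) ~ R(d) + sqrt(V(d)/n) * Qinv(eps), with R(d) = 0.5*ln(sigma2/d)
    and dispersion V(d) = 0.5 (memoryless Gaussian case)."""
    R = 0.5 * log(sigma2 / d)
    V = 0.5
    q_inv = NormalDist().inv_cdf(1.0 - eps)  # Q^{-1}(eps)
    return R + sqrt(V / n) * q_inv

r_inf = 0.5 * log(1.0 / 0.25)                # asymptotic limit ~0.693 nats
r_100 = finite_blocklength_rate(n=100, d=0.25, eps=0.01)
print(r_100 - r_inf)  # positive rate penalty at blocklength 100
```

The penalty term $\sqrt{V(d)/n}\,Q^{-1}(\epsilon)$ shrinks like $1/\sqrt{n}$, which is exactly the second-order behavior the abstract quantifies for sources with memory.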
- Published
- 2019
- Full Text
- View/download PDF
40. Raman spectroscopic histology using machine learning for nonalcoholic fatty liver disease.
- Author
-
Helal, Khalifa Mohammad, Taylor, James Nicholas, Cahyadi, Harsono, Okajima, Akira, Tabata, Koji, Itoh, Yoshito, Tanaka, Hideo, Fujita, Katsumasa, Harada, Yoshinori, and Komatsuzaki, Tamiki
- Subjects
- *
FATTY liver , *MACHINE learning , *HISTOLOGY , *VITAMIN A , *RAT diseases - Abstract
Histopathology requires the expertise of specialists to diagnose morphological features of cells and tissues. Raman imaging can provide additional biochemical information to benefit histological disease diagnosis. Using a dietary model of nonalcoholic fatty liver disease in rats, we combine Raman imaging with machine learning and information theory to evaluate cellular‐level information in liver tissue samples. After increasing signal‐to‐noise ratio in the Raman images through superpixel segmentation, we extract biochemically distinct regions within liver tissues, allowing for quantification of characteristic biochemical components such as vitamin A and lipids. Armed with microscopic information about the biochemical composition of the liver tissues, we group tissues having similar composition, providing a descriptor enabling inference of tissue states, contributing valuable information to histological inspection. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
41. Multiple description image compression based on multiwavelets.
- Author
-
Wang, Ning, Ge, Shuangkui, Li, Baobin, and Peng, Lizhong
- Subjects
- *
IMAGE compression , *WAVELETS (Mathematics) , *STATISTICAL correlation , *TRANSFORM coding , *RATE distortion theory - Abstract
Multiple description coding (MDC) is one of the source coding techniques used to alleviate the problems of packet loss in the network. The decoder estimates the lost signals from received ones, based on the statistical correlation between descriptions. However, this correlation also introduces compression redundancy at the same time. Therefore, how to make efficient use of the introduced correlation is of great importance in practical MDC approaches. In this paper, we propose a multiple description image coding scenario based on balanced multiwavelets. Two simple and effective methods to reconstruct the original image from partial descriptions are suggested. Furthermore, an optimization criterion corresponding to this multiwavelet-based system is provided. According to this criterion, we can choose appropriate multifilter banks to satisfy different demands. Experimental results show that the optimized multifilter banks in a simulated transform coding environment perform very well. [ABSTRACT FROM AUTHOR]
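A stripped-down illustration of the MDC idea (not the paper's multiwavelet construction): split a signal into two descriptions by odd/even subsampling, reconstruct losslessly when both descriptions arrive, and interpolate from the surviving description when one is lost:

```python
def encode_descriptions(x):
    """Split a signal into two descriptions by polyphase (even/odd) subsampling."""
    return x[0::2], x[1::2]

def decode_central(desc_even, desc_odd):
    """Central decoder: interleave both descriptions (lossless here)."""
    out = []
    for e, o in zip(desc_even, desc_odd):
        out.extend([e, o])
    return out

def decode_side(desc_even, n):
    """Side decoder: the odd description is lost; estimate odd samples by
    averaging neighboring even samples (exploits inter-description correlation)."""
    out = []
    for i, e in enumerate(desc_even):
        out.append(e)
        nxt = desc_even[i + 1] if i + 1 < len(desc_even) else e
        out.append(0.5 * (e + nxt))
    return out[:n]

x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
even, odd = encode_descriptions(x)
assert decode_central(even, odd) == x
approx = decode_side(even, len(x))
mse = sum((a - b) ** 2 for a, b in zip(x, approx)) / len(x)
print(mse)  # 0.125: only the boundary sample is mis-predicted on this ramp
```

The redundancy that makes the side decoder work (smoothness shared across descriptions) is precisely the correlation the paper's optimization criterion seeks to spend efficiently.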
- Published
- 2019
- Full Text
- View/download PDF
42. Signaling games in multiple dimensions: Geometric properties of equilibrium solutions.
- Author
-
Kazıklı, Ertan, Gezici, Sinan, and Yüksel, Serdar
- Subjects
- *
NASH equilibrium , *GEOMETRIC analysis , *EQUILIBRIUM , *LITERARY sources , *SYSTEMS design - Abstract
Signaling game problems investigate communication scenarios where encoder(s) and decoder(s) have misaligned objectives due to the fact that they either employ different cost functions or have inconsistent priors. This problem has been studied in the literature for scalar sources under various setups. In this paper, we consider multi-dimensional sources under quadratic criteria in the presence of a bias leading to a mismatch in the criteria, where we show that the generalization from the scalar setup is more than technical. We show that the Nash equilibrium solutions lead to structural richness due to the subtle geometric analysis the problem entails, with consequences in both system design, the presence of linear Nash equilibria, and an information theoretic problem formulation. We first provide a set of geometric conditions that must be satisfied in equilibrium considering any multi-dimensional source. Then, we consider independent and identically distributed sources and characterize necessary and sufficient conditions under which an informative linear Nash equilibrium exists. These conditions involve the bias vector that leads to misaligned costs. Depending on certain conditions related to the bias vector, the existence of linear Nash equilibria requires sources with a Gaussian or a symmetric density. Moreover, in the case of Gaussian sources, our results have a rate–distortion theoretic implication that achievable rates and distortions in the considered game theoretic setup can be obtained from its team theoretic counterpart. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
43. Causal Sampling, Compressing, and Channel Coding of Streaming Data
- Author
-
Guo, Nian
- Subjects
sampling ,Causal code ,rate-distortion theory ,remote estimation ,reliability function ,joint source-channel coding ,streaming ,source coding ,control ,Electrical Engineering - Abstract
With the emergence of the Internet of Things, communication systems, such as those employed in distributed control and tracking scenarios, are becoming increasingly dynamic, interactive, and delay-sensitive. The data in such real-time systems arrive at the encoder progressively in a streaming fashion. An intriguing question is: what codes can transmit streaming data with both high reliability and low latency? Classical non-causal (block) encoding schemes can transmit data reliably but under the assumption that the encoder knows the entire data block before the transmission. While this is a realistic assumption in delay-tolerant systems, it is ill-suited to real-time systems due to the delay introduced by collecting data into a block. This thesis studies causal encoding: the encoder transmits information based on the causally received data while the data is still streaming in and immediately incorporates the newly received data into a continuing transmission on the fly. This thesis investigates causal encoding of streaming data in three scenarios: causal sampling, causal lossy compressing, and causal joint source-channel coding (JSCC). In the causal sampling scenario, a sampler observes a continuous-time source process and causally decides when to transmit real-valued samples of it under a constraint on the average number of samples per second; an estimator uses the causally received samples to approximate the source process in real time. We propose a causal sampling policy that achieves the best tradeoff between the sampling frequency and the end-to-end real-time estimation distortion for a class of continuous Markov processes. In the causal lossy compressing scenario, the sampling frequency constraint in the causal sampling scenario is replaced by a rate constraint on the average number of bits per second. We propose a causal code that achieves the best causal distortion-rate tradeoff for the same class of processes. 
In the causal JSCC scenario, the noiseless channel and the continuous-time process in the previous scenarios are replaced by a discrete memoryless channel with feedback and a sequence of streaming symbols, respectively. We propose a causal joint source-channel code that achieves the maximum exponentially decaying rate of the error probability compatible with a given rate. Remarkably, the fundamental limits in the causal lossy compressing and the causal JSCC scenarios achieved by our causal codes are no worse than those achieved by the best non-causal codes. In addition to deriving the fundamental limits and presenting the causal codes that achieve the limits, we also show that our codes apply to control systems, are resilient to system deficiencies such as channel delay and noise, and have low complexities.
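A toy version of the causal-sampling tradeoff studied in this thesis (an illustrative event-triggered policy under assumed dynamics, not the provably optimal policy derived there): sample a Gaussian random walk whenever it drifts a threshold away from the last transmitted value, with the estimator simply holding the last received sample:

```python
import random

def threshold_sampler(steps, delta, sigma=1.0, seed=0):
    """Causal event-triggered sampling of a Gaussian random walk: transmit the
    current value whenever it drifts >= delta from the last transmitted sample.
    Returns (average sampling rate, average squared estimation error)."""
    rng = random.Random(seed)
    u = 0.0
    last = 0.0        # estimator holds the most recent received sample
    n_samples = 0
    sq_err = 0.0
    for _ in range(steps):
        u += rng.gauss(0.0, sigma)
        if abs(u - last) >= delta:
            last = u  # causal decision: depends only on the past, no lookahead
            n_samples += 1
        sq_err += (u - last) ** 2
    return n_samples / steps, sq_err / steps

rate_tight, mse_tight = threshold_sampler(50_000, delta=1.0)
rate_loose, mse_loose = threshold_sampler(50_000, delta=3.0)
# Larger thresholds send fewer samples but incur larger distortion
print(rate_tight > rate_loose, mse_tight < mse_loose)
```

Sweeping `delta` traces an empirical sampling-frequency versus distortion curve, the discrete analogue of the frequency-distortion tradeoff the thesis characterizes exactly for a class of continuous Markov processes.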
- Published
- 2023
- Full Text
- View/download PDF
44. Information Bottleneck Classification in Extremely Distributed Systems
- Author
-
Denis Ullmann, Shideh Rezaeifar, Olga Taran, Taras Holotyak, Brandon Panos, and Slava Voloshynovskiy
- Subjects
information bottleneck principle ,classification ,deep networks ,decentralized model ,rate-distortion theory ,Science ,Astrophysics ,QB460-466 ,Physics ,QC1-999 - Abstract
We present a new decentralized classification system based on a distributed architecture. This system consists of distributed nodes, each possessing its own dataset and computing modules, along with a centralized server, which provides probes for classification and aggregates the responses of the nodes for a final decision. Each node, with access to its own training dataset of a given class, is trained as an auto-encoder system consisting of a fixed data-independent encoder, a pre-trained quantizer, and a class-dependent decoder. Hence, these auto-encoders are highly dependent on the class probability distribution for which the reconstruction distortion is minimized. Conversely, when an encoding–quantizing–decoding node observes data from a different distribution, unseen at training, there is a mismatch, and the decoding is suboptimal, leading to a significant increase in the reconstruction distortion. The final classification is performed at the centralized classifier, which votes for the class with the minimum reconstruction distortion. In addition to the system's applicability to applications facing big-data communication problems and/or requiring private classification, the above distributed scheme creates a theoretical bridge to the information bottleneck principle. The proposed system demonstrates very promising performance on basic datasets such as MNIST and FashionMNIST.
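The minimum-reconstruction-distortion vote at the centralized classifier can be sketched with per-class linear auto-encoders. The PCA-based encoder/decoder and the synthetic two-class data below are stand-ins chosen for illustration only; the paper's nodes use a fixed data-independent encoder, a pre-trained quantizer, and trained class-dependent decoders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for per-class training data: two classes in R^10
# whose energy concentrates in different coordinate directions.
d, n = 10, 500
X0 = rng.normal(0, 1, (n, d)) @ np.diag(np.linspace(2.0, 0.1, d))
X1 = rng.normal(0, 1, (n, d)) @ np.diag(np.linspace(0.1, 2.0, d))

def fit_node(X, k=3):
    """Per-class 'auto-encoder': mean plus top-k principal directions
    (a PCA stand-in for the class-dependent decoder in the paper)."""
    mu = X.mean(0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def distortion(x, node):
    """Reconstruction distortion of probe x under one node's auto-encoder."""
    mu, V = node
    z = (x - mu) @ V.T       # encode into the k-dimensional latent space
    xhat = mu + z @ V        # decode back to R^d
    return float(np.sum((x - xhat) ** 2))

nodes = [fit_node(X0), fit_node(X1)]

# Centralized vote: assign the probe to the node with minimum distortion.
probe = X1[0]
pred = int(np.argmin([distortion(probe, nd) for nd in nodes]))
print("predicted class:", pred)
```

A probe drawn from an unseen distribution is reconstructed poorly by the mismatched node, which is what drives the vote toward the correct class.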
- Published
- 2020
- Full Text
- View/download PDF
45. A Novel Rate-Distortion Method in 3D Video Capturing in the Context of High Efficiency Video Coding (HEVC) in Intelligent Communications
- Author
-
Stephanakis, Ioannis M., Chochliouros, Ioannis P., Dagiuklas, Anastasios, Anastassopoulos, George C., Papadopoulos, Harris, editor, Andreou, Andreas S., editor, Iliadis, Lazaros, editor, and Maglogiannis, Ilias, editor
- Published
- 2013
- Full Text
- View/download PDF
46. Phase Transitions in Rate Distortion Theory and Deep Learning
- Author
-
Andreas Klotz, Felix Voigtlaender, and Philipp Grohs
- Subjects
Applied Mathematics ,Rate–distortion theory ,Combinatorics ,Sobolev space ,Computational Mathematics ,Function approximation ,Computational Theory and Mathematics ,Lipschitz domain ,Scheme (mathematics) ,Bounded function ,Unit (ring theory) ,Analysis ,Mathematics ,Probability measure - Abstract
Rate distortion theory is concerned with optimally encoding signals from a given signal class $\mathcal{S}$ using a budget of $R$ bits, as $R \rightarrow \infty$. We say that $\mathcal{S}$ can be compressed at rate $s$ if we can achieve an error of at most $\mathcal{O}(R^{-s})$ for encoding the given signal class; the supremal compression rate is denoted by $s^*(\mathcal{S})$. Given a fixed coding scheme, there usually are some elements of $\mathcal{S}$ that are compressed at a higher rate than $s^*(\mathcal{S})$ by the given coding scheme; in this paper, we study the size of this set of signals. We show that for certain “nice” signal classes $\mathcal{S}$, a phase transition occurs: We construct a probability measure $\mathbb{P}$ on $\mathcal{S}$ such that for every coding scheme $\mathcal{C}$ and any $s > s^*(\mathcal{S})$, the set of signals encoded with error $\mathcal{O}(R^{-s})$ by $\mathcal{C}$ forms a $\mathbb{P}$-null-set. In particular, our results apply to all unit balls in Besov and Sobolev spaces that embed compactly into $L^2(\Omega)$ for a bounded Lipschitz domain $\Omega$. As an application, we show that several existing sharpness results concerning function approximation using deep neural networks are in fact generically sharp. In addition, we provide quantitative and non-asymptotic bounds on the probability that a random $f \in \mathcal{S}$ can be encoded to within accuracy $\varepsilon$ using $R$ bits. This result is subsequently applied to the problem of approximately representing $f \in \mathcal{S}$ to within accuracy $\varepsilon$ by a (quantized) neural network with at most $W$ nonzero weights.
We show that for any $s > s^*(\mathcal{S})$ there are constants $c, C$ such that, no matter what kind of “learning” procedure is used to produce such a network, the probability of success is bounded from above by $\min\bigl\{1,\, 2^{C \cdot W \lceil \log_2(1+W) \rceil^2 - c \cdot \varepsilon^{-1/s}}\bigr\}$.
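The closed-form bound above is straightforward to evaluate. The sketch below sets the unspecified constants $c$ and $C$ to 1 purely for illustration, and shows the phase-transition behavior in the number of weights $W$ for a fixed accuracy.

```python
import math

def success_bound(W, eps, s, c=1.0, C=1.0):
    """Upper bound min{1, 2^(C*W*ceil(log2(1+W))^2 - c*eps^(-1/s))} on the
    probability that a W-weight quantized network attains accuracy eps.
    The constants c, C are unspecified in the theorem; set to 1 here
    purely for illustration."""
    exponent = C * W * math.ceil(math.log2(1 + W)) ** 2 - c * eps ** (-1.0 / s)
    if exponent >= 0:
        return 1.0       # the bound is trivial (capped at 1)
    return 2.0 ** exponent

# For fixed eps, the bound jumps from essentially 0 to 1 as W grows.
for W in (1, 10, 100):
    print(W, success_bound(W, eps=1e-3, s=1.0))
```

With these illustrative constants, the bound is astronomically small until $W$ is large enough that the first term overtakes $\varepsilon^{-1/s}$, after which it is vacuous.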
- Published
- 2021
47. Polar Codes and Polar Lattices for the Heegard–Berger Problem.
- Author
-
Shi, Jinwen, Liu, Ling, Gunduz, Deniz, and Ling, Cong
- Subjects
- *
LATTICE theory , *RATE distortion theory , *GAUSSIAN channels , *WHITE noise theory , *MARKOV processes - Abstract
Explicit coding schemes are proposed to achieve the rate-distortion function of the Heegard–Berger problem using polar codes. Specifically, a nested polar code construction is employed to achieve the rate-distortion function for doubly symmetric binary sources when the side information may be absent. The nested structure contains two optimal polar codes for lossy source coding and channel coding, respectively. Moreover, a similar nested polar lattice construction is employed when the source and the side information are jointly Gaussian. The proposed polar lattice is constructed by nesting a quantization polar lattice and a capacity-achieving polar lattice for the additive white Gaussian noise channel. [ABSTRACT FROM AUTHOR]
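A standard building block behind such nested lossy constructions is the rate-distortion function of a binary source under Hamming distortion, $R(D) = h(p) - h(D)$. The snippet below evaluates this textbook formula; it is background for the binary case, not the paper's polar construction.

```python
import math

def h2(x):
    """Binary entropy in bits."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def rd_binary(p, D):
    """R(D) = h(p) - h(D) for a Bernoulli(p) source under Hamming
    distortion, valid for 0 <= D <= min(p, 1-p); zero beyond that."""
    p = min(p, 1 - p)
    if D >= p:
        return 0.0
    return h2(p) - h2(D)

print(rd_binary(0.5, 0.11))  # roughly 0.5 bits per symbol
```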
- Published
- 2018
- Full Text
- View/download PDF
48. Lossy Coding of Correlated Sources Over a Multiple Access Channel: Necessary Conditions and Separation Results.
- Author
-
Guler, Basak, Gunduz, Deniz, and Yener, Aylin
- Subjects
- *
GAUSSIAN processes , *CHANNEL coding , *RATE distortion theory , *ELECTRIC distortion , *ENCODING - Abstract
Lossy coding of correlated sources over a multiple access channel (MAC) is studied. First, a joint source-channel coding scheme is presented when the decoder has correlated side information. Next, the optimality of separate source and channel coding that emerges from the availability of a common observation at the encoders or side information at the encoders and the decoder is investigated. It is shown that separation is optimal when the encoders have access to a common observation whose lossless recovery is required at the decoder, and the two sources are independent conditioned on this common observation. Optimality of separation is also proved when the encoder and the decoder have access to shared side information conditioned on which the two sources are independent. These separation results obtained in the presence of side information are then utilized to provide a set of necessary conditions for the transmission of correlated sources over a MAC without side information. Finally, by specializing the obtained necessary conditions to the transmission of binary and Gaussian sources over a MAC, it is shown that they can potentially be tighter than the existing results in the literature, providing a novel converse for this fundamental problem. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
49. Rate-Distortion Theory of Finite Point Processes.
- Author
-
Koliander, Gunther, Schuhmacher, Dominic, and Hlawatsch, Franz
- Subjects
- *
RATE distortion theory , *CODING theory , *INFORMATION theory , *DATA compression , *POINT processes , *MATHEMATICAL bounds - Abstract
We study the compression of data in the case where the useful information is contained in a set rather than a vector, i.e., the ordering of the data points is irrelevant and the number of data points is unknown. Our analysis is based on rate-distortion theory and the theory of finite point processes. We introduce fundamental information-theoretic concepts and quantities for point processes and present general lower and upper bounds on the rate-distortion function. To enable a comparison with the vector setting, we concretize our bounds for point processes of fixed cardinality. In particular, we analyze a fixed number of unordered Gaussian data points and show that we can significantly reduce the required rates compared to the best possible compression strategy for Gaussian vectors. As an example of point processes with variable cardinality, we study the best possible compression of Poisson point processes. For the specific case of a Poisson point process with uniform intensity on the unit square, our lower and upper bounds are separated by only a small gap and thus provide a good characterization of the rate-distortion function. [ABSTRACT FROM AUTHOR]
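A back-of-the-envelope way to see why unordered data admits lower rates: specifying an ordering of $n$ distinct points costs $\log_2(n!)$ bits, so discarding the order can save up to roughly $\log_2(n/e)$ bits per point. This heuristic is only intuition for the abstract's claim, not the paper's actual bounds.

```python
import math

def ordering_bits(n):
    """Bits needed to specify an ordering of n distinct points: log2(n!).
    Computed via lgamma to avoid overflow for large n."""
    return math.lgamma(n + 1) / math.log(2)

# Per-point savings from treating the data as a set rather than a vector.
for n in (2, 16, 1024):
    print(n, ordering_bits(n) / n)
```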
- Published
- 2018
- Full Text
- View/download PDF
50. Coded Caching and Content Delivery With Heterogeneous Distortion Requirements.
- Author
-
Yang, Qianqian and Gunduz, Deniz
- Subjects
- *
CACHE memory , *LOSSY data compression , *RATE distortion theory , *STREAMING video & television , *CONTENT delivery networks - Abstract
Cache-aided coded content delivery is studied for devices with diverse quality-of-service requirements, specified by a different average distortion target. The network consists of a server holding a database of independent contents, and users equipped with local caches of different capacities. User caches are filled by the server during a low traffic period without the knowledge of particular user demands. As opposed to the current literature, which assumes that the users request files in their entirety, it is assumed that the users in the system have distinct distortion requirements; and therefore, each user requests a single file from the database to be served at a different distortion level. Our goal in this paper is to characterize the minimum delivery rate the server needs to transmit over an error-free shared link to satisfy all possible demand combinations at the requested distortion levels, considering both centralized and decentralized cache placement. For centralized cache placement, the optimal delivery rate is characterized for the two-file two-user scenario for any pair of target distortion requirements, when the underlying source distribution is successively refinable. For the two-user scenario with more than two successively refinable files, the optimal scheme is characterized when the cache capacities of the users are the same and the number of files is a multiple of 3. For the general source distribution, not necessarily successively refinable, and with arbitrary number of users and files, a layered caching and delivery scheme is proposed, assuming that scalable source coding is employed at the server. This allows dividing the problem into two subproblems: the lossless caching of each layer with heterogeneous cache sizes, and cache allocation among layers. 
A delivery rate minimization problem is formulated and solved numerically for each layer; while two different schemes are proposed to allocate user caches among layers, namely, proportional cache allocation and ordered cache allocation. A decentralized lossy coded caching scheme is also proposed, and its delivery rate performance is studied. Simulation results validate the effectiveness of the proposed schemes in both settings. [ABSTRACT FROM AUTHOR]
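The lossless caching subproblem solved for each layer is of the classical coded-caching type. As a reference point, the snippet below evaluates the well-known centralized delivery rate $K(1 - M/N)/(1 + KM/N)$ for $K$ users with equal caches of $M$ files and $N$ files (the Maddah-Ali–Niesen formula, exact at integer $KM/N$); it illustrates the per-layer quantity only, not the paper's heterogeneous-cache scheme.

```python
def delivery_rate(K, N, M):
    """Centralized coded-caching delivery rate K(1 - M/N) / (1 + K*M/N)
    for K users, N files, and a cache of M files per user; exact at the
    integer points t = K*M/N."""
    return K * (1 - M / N) / (1 + K * M / N)

# Two users, two files: the rate falls from 2 file-units to 0 as caches grow.
for M in (0, 1, 2):
    print(M, delivery_rate(K=2, N=2, M=M))
```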
- Published
- 2018
- Full Text
- View/download PDF