41 results on "R Varshney"
Search Results
2. Explaining Artificial Intelligence Generation and Creativity: Human interpretability for novel ideas and artifacts
- Author
- Payel Das and Lav R. Varshney
- Subjects
- Applied Mathematics, Signal Processing, Electrical and Electronic Engineering
- Published
- 2022
3. Expected Extinction Times of Epidemics With State-Dependent Infectiousness
- Author
- Akhil Bhimaraju, Avhishek Chatterjee, and Lav R. Varshney
- Subjects
- Social and Information Networks (cs.SI), Computer Networks and Communications, Control and Systems Engineering, Populations and Evolution, Computer Science Applications
- Abstract
We model an epidemic where the per-person infectiousness in a network of geographic localities changes with the total number of active cases. This would happen as people adopt more stringent non-pharmaceutical precautions when the population has a larger number of active cases. We show that there exists a sharp threshold such that when the curing rate for the infection is above this threshold, the mean time for the epidemic to die out is logarithmic in the initial infection size, whereas when the curing rate is below this threshold, the mean time for epidemic extinction is infinite. We also show that when the per-person infectiousness goes to zero asymptotically as a function of the number of active cases, the mean extinction times all have the same asymptote, independent of network structure. Simulations bear out these results, while also demonstrating that if the per-person infectiousness is large when the epidemic size is small (i.e., the precautions are lax when the epidemic is small and only get stringent after the epidemic has become large), it might take a very long time for the epidemic to die out. We also provide some analytical insight into these observations. (To appear in IEEE Transactions on Network Science and Engineering.)
- Published
- 2022
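The threshold behavior described in the abstract above can be illustrated with a toy simulation. This is only a sketch, not the paper's model: a single-locality birth-death chain with a hypothetical precaution law `beta0 / (1 + n)` standing in for state-dependent infectiousness, with all rates chosen for illustration.

```python
import random

def extinction_time(n0, mu=1.0, beta0=0.5, N=1000, seed=0):
    """Continuous-time birth-death chain for one locality (Gillespie steps).

    n = active cases. Per-person infectiousness beta0 / (1 + n) shrinks as
    cases grow (a stand-in for precaution adoption); mu is the curing rate.
    Returns the time until the epidemic dies out (n hits 0).
    """
    rng = random.Random(seed)
    n, t = n0, 0.0
    while n > 0:
        infect = n * (N - n) / N * beta0 / (1 + n)  # state-dependent infections
        cure = mu * n                               # recoveries
        total = infect + cure
        t += rng.expovariate(total)                 # time to next event
        if rng.random() < infect / total:
            n += 1
        else:
            n -= 1
    return t

# With the curing rate well above the infection pressure, extinction is fast
# and the mean time grows only slowly with the initial infection size.
for n0 in (10, 100):
    print(n0, round(extinction_time(n0), 2))
```

Running the chain for several initial sizes gives a rough feel for the "logarithmic in initial infection size" regime; the infinite-mean regime below the threshold is not reachable with this toy's parameters.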
4. (Re)discovering Laws of Music Theory Using Information Lattice Learning
- Author
- Haizi Yu, Lav R. Varshney, Heinrich Taube, and James A. Evans
- Published
- 2022
5. Coding for Scalable Blockchains via Dynamic Distributed Storage
- Author
- Lav R. Varshney and Ravi Kiran Raman
- Subjects
- Computer Networks and Communications, Computer science, Distributed computing, Distributed data store, Scalability, Electrical and Electronic Engineering, Software, Computer Science Applications, Coding (social sciences)
- Published
- 2021
6. Guest Editorial Signal Processing Advances in Wireless Transmission of Information and Power
- Author
- Kaibin Huang, Bruno Clerckx, Lav R. Varshney, Sennur Ulukus, and Mohamed-Slim Alouini
- Subjects
- Signal Processing, Computer science, Information technology, Wireless, Maximum power transfer theorem, Wireless power transfer, Electrical and Electronic Engineering, Telecommunications, Energy harvesting, Wireless sensor network
- Abstract
The papers in this special section focus on signal processing advances in wireless transmission of power and information. Wireless power transfer (WPT) and wireless information and power transfer (WIPT) have received growing attention in the research community in the past few years. In this special issue, a total of fourteen papers present state-of-the-art results in the broad area of wireless transmission of information and power, with a special emphasis on signal processing advances. The special issue starts with a guest-editor-authored tutorial overview paper that reviews the signal processing, machine learning, sensing, and computing techniques, challenges, and opportunities in future networks based on WPT and WIPT.
- Published
- 2021
7. The CEO Problem With rth Power of Difference and Logarithmic Distortions
- Author
- Lav R. Varshney and Daewon Seo
- Subjects
- Logarithm, Gaussian, Estimator, Library and Information Sciences, Upper and lower bounds, Noise (electronics), Computer Science Applications, Distortion (mathematics), Combinatorics, Entropy (classical thermodynamics), Quadratic equation, Information Systems, Mathematics
- Abstract
The CEO problem has received much attention since it was first introduced by Berger et al., but there are limited results on non-Gaussian models with non-quadratic distortion measures. In this work, we extend the quadratic Gaussian CEO problem to two non-Gaussian settings with general $r$th power of difference distortion. Assuming an identical observation channel across agents, we study the asymptotics of distortion decay as the number of agents and the sum-rate $R_{\mathsf{sum}}$ grow without bound, while individual rates vanish. The first setting is a regular source-observation model with $r$th power of difference distortion, which subsumes the quadratic Gaussian CEO problem, and we establish that the distortion decays as $\mathcal{O}(R_{\mathsf{sum}}^{-r/2})$ when $r \ge 2$. We use sample median estimation after the Berger-Tung scheme for achievability. The other setting is a non-regular source-observation model, including uniform additive noise models, with $r$th power of difference distortion, for which estimation-theoretic regularity conditions do not hold. A distortion decay of $\mathcal{O}(R_{\mathsf{sum}}^{-r})$ when $r \ge 1$ is obtained for the non-regular model by a midrange estimator following the Berger-Tung scheme. We also provide converses based on the Shannon lower bound for the regular model and the Chazan-Zakai-Ziv bound for the non-regular model. Lastly, we provide a sufficient condition for the regular model under which quadratic and logarithmic distortions are asymptotically equivalent, via an entropy power relationship, as the number of agents grows. This proof relies on the Bernstein-von Mises theorem.
- Published
- 2021
8. Optimal Recovery of Missing Values for Non-Negative Matrix Factorization
- Author
- Rebecca Chen Dean and Lav R. Varshney
- Subjects
- Biological data, Error bound, Probabilistic logic, Similarity measure, Minimax, Missing data, Local structure, Upper and lower bounds, Clustering, Non-negative matrix factorization, Matrix decomposition, Missing values, Approximation error, Signal Processing, Imputation (statistics), Algorithm, Mathematics
- Abstract
Missing-value imputation is often evaluated by a similarity measure between actual and imputed data. However, it may be more meaningful to evaluate downstream algorithm performance after imputation than the imputation itself. We describe a straightforward unsupervised imputation algorithm, a minimax approach based on optimal recovery, and derive probabilistic error bounds on downstream non-negative matrix factorization (NMF). Under certain geometric conditions, we prove upper bounds on NMF relative error, the first bounds of this type for missing values. We also give probabilistic bounds under the same geometric assumptions. Experiments on image data and biological data show that this theoretically grounded technique performs as well as or better than other imputation techniques that account for local structure. We also comment on imputation fairness.
- Published
- 2021
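The evaluation pipeline advocated in the abstract above (judge imputation by downstream NMF error, not by imputation error) can be sketched as follows. Column-mean imputation is a simple baseline standing in for the paper's minimax optimal-recovery imputer, which is not reproduced here; the NMF is a plain Lee-Seung multiplicative update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Low-rank non-negative ground truth: X = W H with inner rank 3.
W_true = rng.random((30, 3))
H_true = rng.random((3, 20))
X = W_true @ H_true

# Hide 10% of the entries uniformly at random.
mask = rng.random(X.shape) < 0.1
X_obs = X.copy()
X_obs[mask] = np.nan

# Baseline imputation: replace each missing entry by its column mean
# (a stand-in for the paper's optimal-recovery imputer).
col_means = np.nanmean(X_obs, axis=0)
X_imp = np.where(np.isnan(X_obs), col_means, X_obs)

def nmf(V, rank, iters=500, eps=1e-9):
    """Plain multiplicative-update NMF (Lee-Seung), Frobenius loss."""
    r = np.random.default_rng(1)
    W = r.random((V.shape[0], rank))
    H = r.random((rank, V.shape[1]))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Downstream metric: NMF relative error against the true matrix.
W, H = nmf(X_imp, rank=3)
rel_err = float(np.linalg.norm(X - W @ H) / np.linalg.norm(X))
print(f"downstream NMF relative error: {rel_err:.3f}")
```

Swapping in a better imputer should shrink `rel_err`; that downstream quantity, rather than imputation accuracy, is what the paper bounds.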
9. Decision Making in Star Networks With Incorrect Beliefs
- Author
- Lav R. Varshney, Daewon Seo, and Ravi Kiran Raman
- Subjects
- Signal Processing (eess.SP), Information Theory (cs.IT), Mathematical optimization, Cumulative prospect theory, Stochastic process, Computer science, Bayesian probability, Bayes' theorem, Asymptotically optimal algorithm, Gaussian noise, Signal Processing, Prior probability, Social decision making, Electrical and Electronic Engineering
- Abstract
Consider a Bayesian binary decision-making problem in star networks, where local agents make selfish decisions independently and a fusion agent makes a final decision based on the aggregated decisions and its own private signal. In particular, we assume all agents have private beliefs for the true prior probability, based on which they perform Bayesian decision making. We focus on the Bayes risk of the fusion agent and counterintuitively find that incorrect beliefs can achieve a smaller risk than when agents know the true prior. It is of independent interest for sociotechnical system design that the optimal beliefs of local agents resemble human probability reweighting models from cumulative prospect theory. We also consider asymptotic characterization of the optimal beliefs and the fusion agent's risk in the number of local agents. We find that the optimal risk of the fusion agent converges to zero exponentially fast as the number of local agents grows. Furthermore, having an identical constant belief is asymptotically optimal in the sense of the risk exponent. For additive Gaussian noise, the optimal belief turns out to be a simple function of only the error costs, and the risk exponent can be explicitly characterized. (Final version, to appear in IEEE Transactions on Signal Processing.)
- Published
- 2021
10. The Bee-Identification Error Exponent With Absentee Bees
- Author
- Lav R. Varshney, Vincent Y. F. Tan, and Anshoo Tandon
- Subjects
- Contrast (statistics), Library and Information Sciences, Barcode, Upper and lower bounds, Electronic mail, Computer Science Applications, Image (mathematics), Exponent, Fraction (mathematics), Algorithm, Decoding methods, Information Systems, Mathematics
- Abstract
The “bee-identification problem” was formally defined by Tandon, Tan, and Varshney [IEEE Trans. Commun., vol. 67, 2019], and its error exponent was studied. This work extends those results to the “absentee bees” scenario, where a fraction of the bees are absent from the beehive image used for identification. For this setting, we present an exact characterization of the bee-identification error exponent and show that independent barcode decoding is optimal, i.e., joint decoding of the bee barcodes does not yield a better error exponent than independent decoding of each noisy barcode. This contrasts with the result without absentee bees, where joint barcode decoding yields a significantly higher error exponent than independent barcode decoding. We also define and characterize the ‘capacity’ of the bee-identification problem with absentee bees, and prove a strong converse for the same.
- Published
- 2020
11. Energy-Reliability Limits in Nanoscale Feedforward Neural Networks and Formulas
- Author
- Avhishek Chatterjee and Lav R. Varshney
- Subjects
- Information Theory (cs.IT), Computer science, Circuit design, Energy consumption, Logic gate, Electronic engineering, Feedforward neural network, Electronics, Circuit complexity, Time complexity, Electronic circuit
- Abstract
Due to energy-efficiency requirements, computational systems are now being implemented using noisy nanoscale semiconductor devices whose reliability depends on the energy consumed. We study circuit-level energy-reliability limits for deep feedforward neural networks (multilayer perceptrons) built from such devices, and en route also establish the same limits for formulas (Boolean tree-structured circuits). To obtain energy lower bounds, we extend Pippenger's mutual information propagation technique for characterizing the complexity of noisy circuits, since small circuit complexity need not imply low energy. Many device technologies require all gates to have the same electrical operating point; in circuits of such uniform gates, we show that the minimum energy required to achieve any non-trivial reliability scales superlinearly with the number of inputs. Circuits implemented in emerging device technologies like spin electronics can, however, have gates operating at different electrical points; in circuits of such heterogeneous gates, we show energy scaling can be linear in the number of inputs. Building on our extended mutual information propagation technique and using crucial insights from convex optimization theory, we develop an algorithm to compute energy lower bounds for any given Boolean tree under heterogeneous gates. This algorithm runs in time linear in the number of gates, and is therefore practical for modern circuit design. As part of our development we find a simple procedure for energy allocation across circuit gates with different operating points and neural networks with differently operating layers. (To appear in IEEE Journal on Selected Areas in Information Theory, special issue on deep learning.)
- Published
- 2020
12. Cost-Reliability Tradeoffs in Fusing Unreliable Computational Units
- Author
- Mehmet A. Donmez, Maxim Raginsky, Andrew C. Singer, and Lav R. Varshney
- Subjects
- Fusion, Unreliable computation, Computational neuroscience, Redundancy, Computer science, Fidelity, Cost-reliability tradeoff, In-sensor computing, Reliability engineering, Fuse (electrical), Convex function
- Abstract
We investigate fusing several unreliable computational units that perform the same task. We model an unreliable computational outcome as an additive perturbation to its error-free result in terms of its fidelity and cost. We analyze reliability of replication-based strategies that distribute cost across several unreliable units and fuse their outcomes. When the cost is a convex function of fidelity, the optimal replication-based strategy in terms of incurred cost while achieving a target mean-square error level may fuse several unreliable computational units. For concave and linear costs, a single more reliable unit incurs lower cost compared to fusion of several lower cost and less reliable units while achieving the same mean-square error level. We show how our results give insight into problems from theoretical neuroscience and circuits.
- Published
- 2020
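The replication-and-fuse idea in the abstract above can be illustrated numerically: model each unreliable unit as returning the error-free result plus an additive perturbation, fuse by averaging, and watch the mean-square error drop as 1/K. The Gaussian perturbation, fusion rule, and all parameters are illustrative assumptions; the paper's cost-fidelity tradeoff is not modeled here.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 3.0       # error-free result of the computation
sigma = 1.0       # perturbation scale of a single unreliable unit
trials = 20000

def fused_mse(k):
    """MSE of averaging k unreliable units, each returning truth + N(0, sigma^2)."""
    outputs = truth + sigma * rng.normal(size=(trials, k))
    fused = outputs.mean(axis=1)            # simple fusion rule: average
    return float(np.mean((fused - truth) ** 2))

m1, m4 = fused_mse(1), fused_mse(4)
print(f"one unit: {m1:.3f}, four units fused: {m4:.3f}")  # ~1.0 vs ~0.25
```

Whether four cheap units beat one expensive unit then depends on how cost scales with per-unit fidelity, which is exactly the convex-vs-concave distinction the abstract draws.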
13. On Multiple-Access in Queue-Length Sensitive Systems
- Author
- Avhishek Chatterjee, Daewon Seo, and Lav R. Varshney
- Subjects
- Information Theory (cs.IT), Computer science, Poisson distribution, Upper and lower bounds, Point process, Multiple-access channel, Channel capacity, Quality of service, Transmission (telecommunications), Poisson point process, Algorithm, Queue, Communication channel
- Abstract
We consider transmission of packets over queue-length-sensitive unreliable links, where packets are randomly corrupted through a noisy channel whose transition probabilities are modulated by the queue length. The goal is to characterize the capacity of this channel. We particularly consider multiple-access systems, where transmitters dispatch encoded symbols over a system that is a superposition of continuous-time $GI_k/GI/1$ queues. A server receives and processes symbols in order of arrival with queue-length-dependent noise. We first determine the capacity of single-user queue-length-dependent channels. Further, we characterize the best and worst dispatch processes for $GI/M/1$ queues and the best and worst service processes for $M/GI/1$ queues. Then, the multiple-access channel capacity is obtained using point processes. When the number of transmitters is large and each arrival process is sparse, the superposition of arrivals approaches a Poisson point process. In characterizing the Poisson approximation, we show that the capacity of the multiple-access system converges to that of a single-user $M/GI/1$ queue-length-dependent system, and an upper bound on the convergence rate is obtained. This implies that the best and worst server behaviors of single-user $M/GI/1$ queues are preserved in the sparse multiple-access case. (Final version, to appear in IEEE Open Journal of the Communications Society.)
- Published
- 2020
14. Are Run-Length Limited Codes Suitable for Simultaneous Energy and Information Transfer?
- Author
- Mehul Motani, Lav R. Varshney, and Anshoo Tandon
- Subjects
- Information transfer, Computer Networks and Communications, Renewable Energy, Sustainability and the Environment, Computer science, Sliding window protocol, Optical recording, Visible light communication, Signal, Algorithm, Decoding methods, Run-length limited, Energy (signal processing)
- Abstract
Run-length limited (RLL) codes are a well-studied class of constrained codes having application in diverse areas, such as optical and magnetic data recording systems, DNA-based storage, and visible light communication. RLL codes have also been proposed for the emerging area of simultaneous energy and information transfer, where the receiver uses the received signal for decoding information as well as for harvesting energy to run its circuitry. In this paper, we show that RLL codes are not the best codes for simultaneous energy and information transfer, in terms of the maximum number of codewords which avoid energy outage, i.e., outage-constrained capacity. Specifically, we show that sliding window constrained (SWC) codes and sub-block energy constrained (SEC) codes have significantly higher outage-constrained capacities than RLL codes for moderate to large energy buffer sizes.
- Published
- 2019
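For readers unfamiliar with the RLL codes discussed above, here is a minimal sketch of the constraint and a brute-force count of constrained words. The (d,k) convention used (each run of 0s strictly between two 1s has length in [d, k], with boundary runs unconstrained) is one common choice and varies across papers; the SWC and SEC codes from the abstract are not implemented.

```python
from itertools import product

def is_rll(word, d, k):
    """(d,k)-RLL check: every run of 0s strictly between two 1s has
    length in [d, k]. (One common convention; boundary handling varies.)"""
    runs = [len(r) for r in "".join(map(str, word)).split("1")]
    interior = runs[1:-1]  # zero-runs between consecutive 1s
    return all(d <= r <= k for r in interior)

def count_rll(n, d, k):
    """Brute-force count of length-n binary words meeting the constraint."""
    return sum(is_rll(w, d, k) for w in product((0, 1), repeat=n))

print(count_rll(4, 1, 2))
```

Counting such words for growing `n` estimates the constrained capacity; the paper's point is that the analogous count for energy-outage-avoiding SWC/SEC codewords grows faster.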
15. Signal Processing Foundations for Time-Based Signal Representations: Neurobiological Parallels to Engineered Systems Designed for Energy Efficiency or Hardware Simplicity
- Author
- Andrew C. Singer, Noyan C. Sevuktekin, Pavan Kumar Hanumolu, and Lav R. Varshney
- Subjects
- Signal Processing, Applied Mathematics, Information processing, Neuroinformatics, Signal, Waveform, Simplicity, Electrical and Electronic Engineering, Computer hardware, Pulse-width modulation, Efficient energy use
- Abstract
Neurobiological systems operate at power levels that are unattainable by modern electronic systems while exhibiting broader information processing capabilities for a number of important tasks. A variety of engineered systems designed for energy efficiency or hardware simplicity use time-based signal representations, which share similar mathematical principles with those that arise naturally in biology. In general, time-based signal representations refer to embedding information into the timing, density, or duration of a predetermined, and often bipolar, waveform. In mammalian nervous systems, it is generally accepted that neurons embed information into the timing and firing density of the sudden changes in their membrane potential, or spikes. Similarly, many low-power electronic systems use signal representations that embed information in the timing, repetition frequency, or duration of simple pulse waveforms. Despite their apparent similarities, such signal representations are often studied in different contexts.
- Published
- 2019
16. The Bee-Identification Problem: Bounds on the Error Exponent
- Author
- Lav R. Varshney, Anshoo Tandon, and Vincent Y. F. Tan
- Subjects
- Discrete mathematics, Zero (complex analysis), Barcode, Upper and lower bounds, Parameter identification problem, Permutation, Code (cryptography), Exponent, Electrical and Electronic Engineering, Decoding methods, Mathematics
- Abstract
Consider the problem of identifying a massive number of bees, uniquely labeled with barcodes, using noisy measurements. We formally introduce this “bee-identification problem”, define its error exponent, and derive efficiently computable upper and lower bounds for this exponent. We show that joint decoding of barcodes provides a significantly better exponent compared to separate decoding followed by permutation inference. For low rates, we prove that the lower bound on the bee-identification exponent obtained using typical random codes (TRC) is strictly better than the corresponding bound obtained using a random code ensemble (RCE). Further, as the rate approaches zero, we prove that the upper bound on the bee-identification exponent meets the lower bound obtained using TRC with joint barcode decoding.
- Published
- 2019
17. Beliefs in Decision-Making Cascades
- Author
- Ravi Kiran Raman, Vivek K Goyal, Lav R. Varshney, Joong Bum Rhim, and Daewon Seo
- Subjects
- Cumulative prospect theory, Computer science, Machine Learning (stat.ML), Computer Science and Game Theory (cs.GT), Social learning, Social planner, Bayes' theorem, Signal Processing, Selection (linguistics), Electrical and Electronic Engineering, Set (psychology), Mathematical economics, Statistical hypothesis testing
- Abstract
This work explores a social learning problem with agents having nonidentical noise variances and mismatched beliefs. We consider an $N$-agent binary hypothesis test in which each agent sequentially makes a decision based not only on a private observation, but also on preceding agents' decisions. In addition, the agents have their own beliefs instead of the true prior, and have nonidentical noise variances in the private signal. We focus on the Bayes risk of the last agent, where the preceding agents are selfish. We first derive the optimal decision rule by recursive belief update and conclude, counterintuitively, that beliefs deviating from the true prior could be optimal in this setting. The effect of nonidentical noise levels in the two-agent case is also considered, and analytical properties of the optimal belief curves are given. Next, we consider a predecessor selection problem wherein a subsequent agent with a certain belief chooses a predecessor from a set of candidates with varying beliefs. We characterize the decision region for choosing such a predecessor and argue that a subsequent agent with beliefs varying from the true prior often ends up selecting a suboptimal predecessor, indicating the need for a social planner. Lastly, we discuss an augmented intelligence design problem that uses a model of human behavior from cumulative prospect theory and investigate its near-optimality and suboptimality. (Final version, to appear in IEEE Transactions on Signal Processing.)
- Published
- 2019
18. Think Your Artificial Intelligence Software Is Fair? Think Again
- Author
- Samuel C. Hoffman, Kalapriya Kannan, Pranay Lohia, Kush R. Varshney, Stephanie Houde, Kuntal Dey, Moninder Singh, Diptikalyan Saha, Aleksandra Mojsilovic, John T. Richards, Sameep Mehta, Michael Hind, Yunfeng Zhang, Karthikeyan Natesan Ramamurthy, Prasanna Sattigeri, Seema Nagar, and Rachel K. E. Bellamy
- Subjects
- Software, Computer science, Stakeholder, Artificial intelligence software, Resolution (logic), Data science
- Abstract
Today, machine-learning software is used to help make decisions that affect people's lives. Some people believe that the application of such software results in fairer decisions because, unlike humans, machine-learning software generates models that are not biased. Think again. Machine-learning software is also biased, sometimes in similar ways to humans, often in different ways. While fair model-assisted decision making involves more than the application of unbiased models (consideration of application context, specifics of the decisions being made, resolution of conflicting stakeholder viewpoints, and so forth), mitigating bias from machine-learning software is important and possible, but difficult and too often ignored.
- Published
- 2019
19. Information and Energy Transmission With Experimentally Sampled Harvesting Functions
- Author
- Lav R. Varshney and Daewon Seo
- Subjects
- Function (mathematics), Information theory, Topology, Noise (electronics), Function approximation, Distortion, Electrical and Electronic Engineering, Energy harvesting, Energy (signal processing), Mathematics, Communication channel
- Abstract
This paper considers the problem of simultaneous information and energy transmission, where the energy harvesting function is only known experimentally at sample points, e.g., due to nonlinearities and parameter uncertainties in harvesting circuits. We investigate the performance loss due to this partial knowledge of the harvesting function in terms of transmitted energy and information. In particular, we assume that the harvesting function belongs to a subclass of the Sobolev space and consider two cases, where the experimental samples are taken either noiselessly or in the presence of noise. Using constructive function approximation and regression methods for noiseless and noisy samples, respectively, we show that the worst-case loss in energy transmission vanishes asymptotically as the number of samples increases. Similarly, the loss in information rate vanishes in the interior of the energy domain; however, it does not always vanish at maximal energy. We further show that the same principle applies in multicast settings, such as medium access in the Wi-Fi protocol. We also consider the end-to-end source–channel communication problem under a source distortion constraint and a channel energy requirement, where both the distortion and harvesting functions are known only at samples.
- Published
- 2019
20. Must Surprise Trump Information?
- Author
- Lav R. Varshney
- Subjects
- General Engineering, Media studies, General Social Sciences, Information technology, Democracy, Surprise, Political science, Service (economics), The Internet
- Abstract
In a recent essay on the role of modern information technologies in democratic processes, Zeynep Tufekci described the actions of Egyptian leader Hosni Mubarak in cutting off Internet and cellular service during the 2011 Tahrir uprising as follows: "The move backfired: it restricted the flow of information coming out of Tahrir Square but caused international attention on Egypt to spike. He hadn't understood that in the 21st century it is the flow of attention, not information (which we already have too much of), that matters."
- Published
- 2019
21. Efficient Local Secret Sharing for Distributed Blockchain Systems
- Author
- Young-Sik Kim, Lav R. Varshney, Naresh R. Shanbhag, Ravi Kiran Raman, and Yongjune Kim
- Subjects
- Blockchain, Distributed database, Computer science, Denial-of-service attack, Encryption, Secret sharing, Computer Science Applications, Public-key cryptography, Modeling and Simulation, Distributed data store, Electrical and Electronic Engineering, Computer network
- Abstract
Blockchain systems store transaction data in the form of a distributed ledger where each peer is to maintain an identical copy. Blockchain systems resemble repetition codes, incurring high storage cost. Recently, distributed storage blockchain (DSB) systems have been proposed to improve storage efficiency by incorporating secret sharing, private key encryption, and information dispersal algorithms. However, the DSB results in significant communication cost when peer failures occur due to denial of service attacks. In this letter, we propose a new DSB approach based on a local secret sharing (LSS) scheme with a hierarchical secret structure of one global secret and several local secrets. The proposed DSB approach with LSS improves the storage and recovery communication costs.
- Published
- 2019
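The local secret sharing (LSS) scheme above, with its hierarchy of one global secret and several local secrets, is not reproduced here; the following is only a minimal sketch of the underlying primitive that DSB systems build on, a textbook t-of-n Shamir secret sharing over a prime field.

```python
import random

P = 2**61 - 1  # prime modulus for the share field

def share(secret, t, n, seed=0):
    """Shamir t-of-n sharing: evaluate a random degree-(t-1) polynomial
    with constant term `secret` at x = 1..n."""
    rng = random.Random(seed)
    coeffs = [secret % P] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation mod P
            y = (y * x + c) % P
        return y
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(1234567, t=3, n=5)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares recover the secret
```

Because any t of the n peers can reconstruct, up to n - t peer failures are tolerated; the LSS refinement arranges secrets hierarchically so that recovery after a failure contacts mostly nearby peers, reducing communication cost.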
22. Decision Making With Quantized Priors Leads to Discrimination
- Author
- Kush R. Varshney and Lav R. Varshney
- Subjects
- Training set, Population, Decision rule, Machine learning, Bounded rationality, Quantization (physics), Political science, Prior probability, Econometrics, Detection theory, Artificial intelligence, Electrical and Electronic Engineering, Expected utility hypothesis
- Abstract
Racial discrimination in decision-making scenarios such as police arrests appears to be a violation of expected utility theory. Drawing on results from the science of information, we discuss an information-based model of signal detection over a population that generates such behavior, as an alternative explanation to taste-based discrimination by the decision maker or differences among the racial populations. This model uses the decision rule that maximizes expected utility, the likelihood ratio test, but constrains the precision of the threshold to a small discrete set. The precision constraint follows from both bounded rationality in human recollection and finite training data for estimating priors. When combined with social aspects of human decision making and precautionary cost settings, the model predicts the own-race bias that has been observed in several econometric studies.
- Published
- 2017
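A toy numerical illustration of the quantization effect described above: in Gaussian binary detection, compare the Bayes risk of the likelihood-ratio threshold computed from the true prior against the threshold computed from the nearest prior on a coarse grid. The grid, signal model, and equal error costs are illustrative assumptions, not the paper's setup.

```python
import math

def bayes_risk(threshold, prior1, mu=1.0):
    """Bayes risk for X ~ N(0,1) under H0 and N(mu,1) under H1,
    deciding H1 when x > threshold, with equal error costs."""
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    p_false_alarm = 1.0 - Phi(threshold)   # decide H1 under H0
    p_miss = Phi(threshold - mu)           # decide H0 under H1
    return (1.0 - prior1) * p_false_alarm + prior1 * p_miss

prior1 = 0.2  # true prior P(H1)
mu = 1.0

# LRT threshold with the true prior: tau* = mu/2 + ln((1-p)/p)/mu.
tau_opt = mu / 2 + math.log((1 - prior1) / prior1) / mu

# Decision maker constrained to a coarse grid of "remembered" priors.
grid = [0.1, 0.3, 0.5, 0.7, 0.9]
prior_q = min(grid, key=lambda q: abs(q - prior1))
tau_q = mu / 2 + math.log((1 - prior_q) / prior_q) / mu

r_opt = bayes_risk(tau_opt, prior1, mu)
r_q = bayes_risk(tau_q, prior1, mu)
print(f"risk with true prior {r_opt:.4f} vs quantized prior {r_q:.4f}")
```

The quantized threshold always incurs at least the optimal risk; the paper's point is that when two subpopulations have different true priors, this shared coarse grid induces systematically different error rates across them.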
23. Data Pre-Processing for Discrimination Prevention: Information-Theoretic Optimization and Analysis
- Author
- Flavio P. Calmon, Bhanukiran Vinzamuri, Dennis Wei, Kush R. Varshney, and Karthikeyan Natesan Ramamurthy
- Subjects
- Computer science, Supervised learning, Probabilistic logic, Data transformation (statistics), Information theory, Sample size determination, Distortion, Signal Processing, Convex optimization, Data pre-processing, Data mining, Electrical and Electronic Engineering
- Abstract
Non-discrimination is a recognized objective in algorithmic decision making. In this paper, we introduce a novel probabilistic formulation of data pre-processing for reducing discrimination. We propose a convex optimization for learning a data transformation with three goals: controlling group discrimination, limiting distortion in individual data samples, and preserving utility. Several theoretical properties are established, including conditions for convexity, a characterization of the impact of limited sample size on discrimination and utility guarantees, and a connection between discrimination and estimation. Two instances of the proposed optimization are applied to datasets, including one on real-world criminal recidivism. Results show that discrimination can be greatly reduced at a small cost in classification accuracy and with precise control of individual distortion.
- Published
- 2018
24. Universal Joint Image Clustering and Registration Using Multivariate Information Measures
- Author
-
Lav R. Varshney and Ravi Kiran Raman
- Subjects
Pixel, Computer science, Image registration, Pattern recognition, Mutual information, Image (mathematics), Asymptotically optimal algorithm, Signal Processing, Unsupervised learning, Pairwise comparison, Artificial intelligence, Electrical and Electronic Engineering, Cluster analysis - Abstract
We consider the problem of universal joint clustering and registration of images. Image clustering focuses on grouping similar images, while image registration refers to the task of aligning copies of an image that have been subject to rigid-body transformations, such as rotations and translations. We first study registering two images using maximum mutual information and prove its asymptotic optimality. We then show the shortcomings of pairwise registration in multi-image registration, and design an asymptotically optimal algorithm based on multi-information. Further, we define a novel multivariate information functional to perform joint clustering and registration of images, and prove consistency of the algorithm. Finally, we consider registration and clustering of numerous limited-resolution images, defining algorithms that are order-optimal in the scaling of the number of pixels in each image with the number of images.
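Registration by maximizing empirical mutual information, the first step the abstract analyzes, can be sketched in one dimension with cyclic shifts. This is a simplification for illustration: the paper treats 2-D rigid-body transformations, and the binning choice here is an assumption.

```python
import numpy as np

def mutual_information(a, b, bins=8):
    """Empirical mutual information between two equal-length arrays,
    estimated from a joint histogram."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0                          # avoid log(0); zero cells contribute 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def register_shift(ref, img):
    """Return the cyclic shift of `img` that maximizes MI with `ref`."""
    return max(range(len(ref)), key=lambda s: mutual_information(ref, np.roll(img, s)))
```

Registering a cyclically shifted copy of a signal against the original recovers the shift, since the empirical MI is maximized when the two arrays are aligned.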
- Published
- 2018
25. Experiments and Models for Decision Fusion by Humans in Inference Networks
- Author
-
Aditya Vempaty, Pramod K. Varshney, Amy H. Criss, Gregory J. Koop, and Lav R. Varshney
- Subjects
Sociotechnical system, Population, Inference, Machine learning, Hierarchical database model, Electronic mail, Data modeling, Signal Processing, Artificial intelligence, Electrical and Electronic Engineering, Wireless sensor network, Optimal decision - Abstract
With the advent of the Internet of Things (IoT) and a rapid deployment of smart devices and wireless sensor networks (WSNs), humans interact extensively with machine data. These human decision makers use sensors that provide information through a sociotechnical network. The sensors can be other human users or they can be IoT devices. The decision makers themselves are also part of the network, and there is a need to understand how they will behave. In this paper, the decision fusion behavior of humans is analyzed on the basis of behavioral experiments. The data collected from these experiments demonstrate that people perform decision fusion in a stochastic manner dependent on various factors, unlike machines that perform this task in a deterministic manner. A Bayesian hierarchical model is developed to characterize the observed stochastic human behavior. This hierarchical model captures the differences observed in people at individual, crowd, and population levels. The implications of such a model on designing large-scale inference systems are presented by developing optimal decision fusion trees with both human and machine agents.
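A toy simulation of the stochastic fusion behavior the abstract describes: each simulated person follows the deterministic majority rule only with a person-specific probability drawn from a population distribution, giving a two-level (individual/population) hierarchy. The Beta distribution and its parameters are hypothetical stand-ins for the paper's Bayesian hierarchical model.

```python
import numpy as np

def simulate_fusion(local_decisions, n_people=1000, a=8.0, b=2.0, seed=0):
    """Fraction of simulated people deciding 1 when fusing binary local decisions.

    Each person follows the majority rule with probability p ~ Beta(a, b);
    otherwise they report the opposite decision."""
    rng = np.random.default_rng(seed)
    majority = int(np.mean(local_decisions) > 0.5)
    p = rng.beta(a, b, n_people)            # individual compliance rates
    follow = rng.random(n_people) < p       # does each person follow the majority?
    decisions = np.where(follow, majority, 1 - majority)
    return decisions.mean()
```

With Beta(8, 2) compliance (mean 0.8), about 80% of the simulated crowd reports the majority decision, a stochastic spread a deterministic fusion rule would not produce.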
- Published
- 2018
26. Work Capacity of Regulated Freelance Platforms: Fundamental Limits and Decentralized Schemes
- Author
-
Sriram Vishwanath, Avhishek Chatterjee, and Lav R. Varshney
- Subjects
Queueing theory, Operations research, Computer Networks and Communications, Computer science, Software development, Cloud computing, Crowdsourcing, Computer Science Applications, Resource management, Quality (business), Electrical and Electronic Engineering, Software - Abstract
Crowdsourcing of jobs to online freelance platforms is rapidly gaining popularity. Most crowdsourcing platforms are uncontrolled and offer freedom to customers and freelancers to choose each other. This works well for unskilled jobs (e.g., image classification) with no specific quality requirement since freelancers are functionally identical. For skilled jobs (e.g., software development) with specific quality requirements, however, this does not ensure that the maximum number of job requests is satisfied. In this paper, we determine the capacity of regulated freelance systems, in terms of maximum satisfied job requests, and propose centralized schemes that achieve capacity. To ensure decentralized operation and freedom for customers and freelancers, we propose simple schemes compatible with the operation of current crowdsourcing platforms that approximately achieve capacity. Furthermore, for settings where the number of job requests exceeds capacity, we propose a scheme that is agnostic of that information, but is optimal and fair in declining jobs without wait.
- Published
- 2017
27. Malleable Coding for Updatable Cloud Caching
- Author
-
Lav R. Varshney, Vivek K Goyal, and Julius Kusuma
- Subjects
Computer science, Distributed computing, Code word, Provisioning, Cloud computing, Reuse, Cache, Electrical and Electronic Engineering, Data compression, Coding - Abstract
In software-as-a-service applications provisioned through cloud computing, locally cached data are often modified with updates from new versions. In some cases, with each edit, one may want to preserve both the original and new versions. In this paper, we focus on cases in which only the latest version must be preserved. Furthermore, it is desirable for the data to not only be compressed but to also be easily modified during updates, since representing information and modifying the representation both incur cost. We examine whether it is possible to have both compression efficiency and ease of alteration, in order to promote codeword reuse. In other words, we study the feasibility of a malleable and efficient coding scheme. The tradeoff between compression efficiency and malleability cost—the difficulty of synchronizing compressed versions—is measured as the length of a reused prefix portion. The region of achievable rates and malleability is found. Drawing from prior work on common information problems, we show that efficient data compression may not be the best engineering design principle when storing software-as-a-service data. In the general case, goals of efficiency and malleability are fundamentally in conflict.
- Published
- 2016
28. Sparsity-Driven Synthetic Aperture Radar Imaging: Reconstruction, autofocusing, moving targets, and compressed sensing
- Author
-
Mujdat Cetin, Ivana Stojanovic, William Clement Karl, Özben Naime Önhon, Sadegh Samadi, Alan S. Willsky, Kush R. Varshney, Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science, Willsky Alan, and Willsky, Alan S.
- Subjects
Image formation, Synthetic aperture radar, Computer science, Applied Mathematics, Side looking airborne radar, Iterative reconstruction, Inverse synthetic aperture radar, Radar imaging, Compressed sensing, Signal Processing, Computer vision, Artificial intelligence, Electrical and Electronic Engineering - Abstract
This article presents a survey of recent research on sparsity-driven synthetic aperture radar (SAR) imaging. In particular, it reviews 1) the analysis- and synthesis-based sparse signal representation formulations for SAR image formation together with the associated imaging results, 2) sparsity-based methods for wide-angle SAR imaging and anisotropy characterization, 3) sparsity-based methods for joint imaging and autofocusing from data with phase errors, 4) techniques for exploiting sparsity for SAR imaging of scenes containing moving objects, and 5) recent work on compressed sensing (CS)-based analysis and design of SAR sensing missions.
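A generic sparsity-driven reconstruction of the kind surveyed here can be illustrated with iterative soft-thresholding (ISTA) on the objective (1/2)||y - Ax||^2 + lam*||x||_1. The random sensing matrix and parameters below are illustrative assumptions, not a SAR measurement model.

```python
import numpy as np

def ista(A, y, lam=0.1, step=None, iters=500):
    """Iterative soft-thresholding for min_x 0.5*||y - A x||^2 + lam*||x||_1,
    a generic stand-in for sparsity-driven image-formation objectives."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / (largest singular value)^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x + step * (A.T @ (y - A @ x))        # gradient step on the quadratic
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return x
```

On a small synthetic problem (30 random Gaussian measurements of a 3-sparse, length-40 signal), ISTA recovers the support of the true signal.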
- Published
- 2014
29. Collaborative Kalman Filtering for Dynamic Matrix Factorization
- Author
-
Kush R. Varshney, Dhruv Parthasarathy, and John Z. Sun
- Subjects
Moving horizon estimation, Extended Kalman filter, Computer science, Signal Processing, Collaborative filtering, Fast Kalman filter, Kalman filter, Data mining, Electrical and Electronic Engineering, Recommender system, Matrix decomposition - Abstract
We propose a new algorithm for estimation, prediction, and recommendation named the collaborative Kalman filter. Suited for use in collaborative filtering settings encountered in recommendation systems with significant temporal dynamics in user preferences, the approach extends probabilistic matrix factorization in time through a state-space model. This leads to an estimation procedure with parallel Kalman filters and smoothers coupled through item factors. Learning of global parameters uses the expectation-maximization algorithm. The method is compared to existing techniques and performs favorably on both generated data and real-world movie recommendation data.
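A scalar Kalman filter with a random-walk state conveys the core tracking step; the full collaborative Kalman filter couples many such filters through item factors and learns global parameters by EM, which this sketch omits. The noise variances here are assumed values.

```python
def kalman_filter(observations, q=0.01, r=0.25):
    """Scalar Kalman filter for the random-walk model x_t = x_{t-1} + w_t,
    y_t = x_t + v_t, with process variance q and observation variance r.
    A minimal analogue of tracking one user's latent factor over time."""
    x, p = 0.0, 1.0              # initial state estimate and its variance
    estimates = []
    for y in observations:
        p += q                   # predict: variance grows by process noise
        k = p / (p + r)          # Kalman gain
        x += k * (y - x)         # update with the innovation
        p *= (1 - k)             # posterior variance
        estimates.append(x)
    return estimates
```

Fed a constant stream of observations, the estimate converges toward the observed value while the gain settles to its steady-state level.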
- Published
- 2014
30. The Wiring Economy Principle for Designing Inference Networks
- Author
-
Lav R. Varshney
- Subjects
Random graph, Network planning and design, Economy, Computer Networks and Communications, Computer science, Node (networking), Inference, Electrical and Electronic Engineering, Network topology, Integer programming - Abstract
The wiring economy principle in neuroscience has explained many experimentally observed properties of neuronal networks by asserting the need to keep the axons and dendrites that connect neurons small in length. Just like neuronal networks, many distributed systems are physical constructs that incur deployment and maintenance costs for their communication infrastructure. Taking wiring economy as a design goal for engineering systems that perform distributed coordination and inference, this paper formulates and studies the tradeoff between performance and wiring cost. It is shown that separated communication topology design and physical node placement yields optimal design. Designing optimal networks is shown to be NP-complete. The natural relaxation to the integer network design problem is shown to be a reverse convex program. Small optimal networks are computed. Optimally placed random network topologies are demonstrated to have good performance.
- Published
- 2013
31. An Information-Theoretic Characterization of Channels That Die
- Author
-
Vivek K Goyal, Lav R. Varshney, and Sanjoy K. Mitter
- Subjects
Channel code, Computer science, Library and Information Sciences, Communications system, Computer Science Applications, Transmission (telecommunications), State (computer science), Algorithm, Information Systems, Communication channel - Abstract
Given the possibility of communication systems failing catastrophically, we investigate limits to communicating over channels that fail at random times. These channels are finite-state semi-Markov channels. We show that communication with arbitrarily small probability of error is not possible. Making use of results in finite blocklength channel coding, we determine sequences of blocklengths that optimize transmission volume communicated at fixed maximum message error probabilities. We provide a partial ordering of communication channels. A dynamic programming formulation is used to show the structural result that channel state feedback does not improve performance.
- Published
- 2012
32. Generalization Error of Linear Discriminant Analysis in Spatially-Correlated Sensor Networks
- Author
-
Kush R. Varshney
- Subjects
Approximation theory, Random field, Covariance matrix, Geometric probability, Pattern recognition, Linear discriminant analysis, Statistical learning theory, Signal Processing, Artificial intelligence, Electrical and Electronic Engineering, Gaussian process, Mathematics - Abstract
Generalization error, the probability of error of a detection rule learned from training samples on new unseen samples, is a fundamental quantity to be characterized. However, characterizations of generalization error in the statistical learning theory literature are often loose and practically unusable for optimizing detection systems. In this work, focusing on learning linear discriminant analysis detection rules from spatially-correlated sensor measurements, a tight generalization error approximation is developed that can be used to optimize the parameters of a sensor network detection system. As such, the approximation is used to optimize network settings. The approximation is also used to derive a detection error exponent and select an optimal subset of deployed sensor nodes. A Gauss-Markov random field is used to model correlation and weak laws of large numbers in geometric probability are employed in the analysis.
- Published
- 2012
33. Business Analytics Based on Financial Time Series
- Author
-
Aleksandra Mojsilovic and Kush R. Varshney
- Subjects
Finance, Artifact-centric business process model, Applied Mathematics, Demand forecasting, Business analytics, Analytics, New business development, Signal Processing, Business intelligence, Revenue, Electrical and Electronic Engineering, Enterprise resource planning - Abstract
Baniya merchants of the Mughal Empire, burgher merchants of the Swedish Empire, and chonin merchants of the Tokugawa Shogunate had the same questions on their mind as business people do today. To which townspeople should I sell my wares? Of folks that buy from me, are there any that might stop buying from me? Which groups buy which goods? Which saris should I show Ranna Devi to make as much money as I can? How much timber will people want in the coming weeks and months? The world has changed over the centuries with globalization, rapid transportation, instantaneous communication, expansive enterprises, and an explosion of data and signals along with ample computation to process them. In this new age, many continue to answer the aforementioned and other critical business questions in the old-fashioned way, i.e., based on intuition, gut instinct, and personal experience. In our globalized world, however, this is not sufficient anymore and it is essential to replace the business person's gut instinct with science. That science is business analytics. Business analytics is a broad umbrella entailing many problems and solutions, such as demand forecasting and conditioning, resource capacity planning, workforce planning, salesforce modeling and optimization, revenue forecasting, customer/product analytics, and enterprise recommender systems. In our department, we are increasingly directing our focus on developing models and techniques to address such business problems. The goal of this article is to provide the reader with an overview of this interesting new area of research and then home in on applications that might require the use of sophisticated signal processing methodologies and utilize financial signals as input.
- Published
- 2011
34. Performance of LDPC Codes Under Faulty Iterative Decoding
- Author
-
Lav R. Varshney
- Subjects
Theoretical computer science, Noise measurement, Information Theory (cs.IT), Computation, Library and Information Sciences, Computer Science Applications, Communication theory, Nonlinear system, Noise, Low-density parity-check code, Error detection and correction, Algorithm, Decoding methods, Information Systems, Mathematics - Abstract
Departing from traditional communication theory where decoding algorithms are assumed to perform without error, a system where noise perturbs both computational devices and communication channels is considered here. This paper studies limits in processing noisy signals with noisy circuits by investigating the effect of noise on standard iterative decoders for low-density parity-check codes. Concentration of decoding performance around its average is shown to hold when noise is introduced into message-passing and local computation. Density evolution equations for simple faulty iterative decoders are derived. In one model, computing nonlinear estimation thresholds shows that performance degrades smoothly as decoder noise increases, but arbitrarily small probability of error is not achievable. Probability of error may be driven to zero in another system model; the decoding threshold again decreases smoothly with decoder noise. As an application of the methods developed, an achievability result for reliable memory systems constructed from unreliable components is provided., Revised in May 2010 in response to reviewer comments
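A toy stand-in for decoding with unreliable components: a majority-vote decoder for a repetition code whose internal votes are themselves flipped with probability `p_gate`. This is far simpler than the paper's density-evolution analysis of LDPC message-passing, but it exhibits the same qualitative effect of decoder noise raising the residual error rate; all parameters are assumptions for illustration.

```python
import numpy as np

def faulty_majority_error(p_channel, p_gate, n_reps=5, trials=200_000, seed=0):
    """Monte Carlo error rate of a majority-vote decoder for an n-fold
    repetition code when each internal vote is flipped with prob. p_gate."""
    rng = np.random.default_rng(seed)
    bits = rng.random((trials, n_reps)) < p_channel           # channel flips
    votes = bits ^ (rng.random((trials, n_reps)) < p_gate)    # faulty internal votes
    return float(np.mean(votes.sum(axis=1) > n_reps // 2))    # wrong majority
```

With a perfect decoder (`p_gate = 0`) the 5-fold repetition code over a 10% channel errs with probability about 0.0086; adding 5% gate noise visibly degrades it, mirroring the smooth degradation with decoder noise described above.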
- Published
- 2011
35. Concentric Permutation Source Codes
- Author
-
Vivek K Goyal, Ha Q. Nguyen, and Lav R. Varshney
- Subjects
Linde–Buzo–Gray algorithm, Discrete mathematics, Convex hull, Information Theory (cs.IT), Quantization (signal processing), Vector quantization, Codebook, Group code, Algorithm design, Entropy encoding, Electrical and Electronic Engineering, Algorithm, Mathematics - Abstract
Permutation codes are a class of structured vector quantizers with a computationally-simple encoding procedure based on sorting the scalar components. Using a codebook comprising several permutation codes as subcodes preserves the simplicity of encoding while increasing the number of rate-distortion operating points, improving the convex hull of operating points, and increasing design complexity. We show that when the subcodes are designed with the same composition, optimization of the codebook reduces to a lower-dimensional vector quantizer design within a single cone. Heuristics for reducing design complexity are presented, including an optimization of the rate allocation in a shape-gain vector quantizer with gain-dependent wrapped spherical shape codebook.
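The computationally simple encoding step, a sort, can be sketched as follows. This is a minimal single-code illustration, not the paper's multi-subcode (concentric) construction; the example vectors are arbitrary.

```python
import numpy as np

def permutation_encode(x, init_codeword):
    """Encode x with a permutation code: output the permutation of the fixed
    initial codeword whose components are ordered like those of x.
    Encoding therefore costs only a sort."""
    ranks = np.argsort(np.argsort(x))     # rank of each component of x
    return np.sort(init_codeword)[ranks]  # codeword with matching component order
```

For x = [0.3, -1.2, 2.5] and initial codeword [-1, 0, 1], the encoder outputs [0, -1, 1]: the codeword components inherit the ordering of x.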
- Published
- 2010
36. Generalized Water-filling for Source-aware Energy-efficient SRAMs
- Author
-
Mingu Kang, Yongjune Kim, Naresh R. Shanbhag, and Lav R. Varshney
- Subjects
Optimization problem, Computer science, Information Theory (cs.IT), Energy consumption, Discrete optimization, Convex optimization, Electrical and Electronic Engineering, Greedy algorithm, Algorithm, Word (computer architecture), Random access, Efficient energy use - Abstract
Conventional low-power static random access memories (SRAMs) reduce read energy by decreasing the bit-line voltage swings uniformly across the bit-line columns. This is because the read energy is proportional to the bit-line swings. On the other hand, bit-line swings are limited by the need to avoid decision errors especially in the most significant bits. We propose an information-theoretic approach to determine optimal non-uniform bit-line swings by formulating convex optimization problems. For a given constraint on mean squared error of retrieved words, we consider criteria to minimize energy (for low-power SRAMs), maximize speed (for high-speed SRAMs), and minimize energy-delay product. These optimization problems can be interpreted as classical water-filling, ground-flattening and water-filling, and sand-pouring and water-filling, respectively. By leveraging these interpretations, we also propose greedy algorithms to obtain optimized discrete swings. Numerical results show that energy-optimal swing assignment reduces energy consumption by half at a peak signal-to-noise ratio of 30 dB for an 8-bit accessed word. The energy savings increase to four times for a 16-bit accessed word.
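The classical water-filling allocation that the paper generalizes can be sketched with the textbook active-set computation: pour a fixed budget over "channels" so the water level (noise plus allocation) is constant wherever the allocation is positive. The noise levels below are hypothetical stand-ins for per-column conditions, not SRAM parameters.

```python
import numpy as np

def water_filling(noise_levels, budget):
    """Classical water-filling: allocate `budget` across channels so that
    noise + allocation equals a common water level on active channels."""
    n = np.sort(np.asarray(noise_levels, dtype=float))
    # Try water levels keeping the k lowest-noise channels active, largest k first.
    for k in range(len(n), 0, -1):
        level = (budget + n[:k].sum()) / k
        if level >= n[k - 1]:      # all k active channels sit below the water line
            break
    return np.maximum(level - np.asarray(noise_levels, dtype=float), 0.0)
```

For noise levels [1, 2, 3] and budget 3, the water level is 3 and the allocation is [2, 1, 0]; the worst channel gets nothing, and the allocations always sum to the budget.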
- Published
- 2018
37. Postarthroplasty Examination Using X-Ray Images
- Author
-
A. Kulski, J.-F. Deux, Kush R. Varshney, R. Raymond, N. Paragios, Alain Rahmouni, and P. Hernigou
- Subjects
Knee Joint, Computer science, Normal Distribution, Prosthesis Implantation, Kinematics, Iterative reconstruction, Prosthesis, Imaging, Three-Dimensional, Image Processing, Computer-Assisted, Humans, Computer vision, Leg Bones, Electrical and Electronic Engineering, Arthroplasty, Replacement, Knee, Radiological and Ultrasound Technology, Biomechanics, Arthroplasty, Computer Science Applications, Feature (computer vision), Orthopedic surgery, Artificial intelligence, Tomography, X-Ray Computed, Software - Abstract
Arthroplasty, the implantation of prostheses into joints, is a surgical procedure that is affecting a larger and larger number of patients over time. As a result, it is increasingly important to develop imaging techniques to noninvasively examine joints with prostheses after surgery, both statically and dynamically in 3-D. The static problem is considered here, with the aim to create a 3-D shape model of the bone as well as the prosthesis using a set of 2-D X-rays from various viewpoints. The most important challenge to be addressed is the lack of texture, the most common feature to recover shape from multiple views. In order to overcome this limitation, we reformulate the problem using a novel multiview segmentation approach where an active contours 3-D surface evolution with level-set implementation is used to recover the shape of bones and prostheses in postoperative joints. The recovered shape may then be used to track 3-D motions in dynamic X-ray sequences to obtain kinematic information.
- Published
- 2009
38. Signal Processing for Social Good [In the Spotlight]
- Author
-
Kush R. Varshney
- Subjects
Sustainable development, Signal processing, Poverty, Operations research, Computer science, Applied Mathematics, Information processing, Speech processing, Signal, Injustice, Signal Processing, Electrical and Electronic Engineering, Radar, Telecommunications - Abstract
Communication, speech processing, seismology, and radar are well known applications of signal processing that contribute to the betterment of humanity. But is there a more direct way that signal and information processing can reduce poverty, hunger, inequality, injustice, ill health, and other causes of human suffering? The member states of the United Nations ratified 17 sustainable development goals in 2015, which, if achieved by the targeted year 2030, will end or greatly curtail these problems. Achieving the global goals, however, will require cooperation from all, including the signal processing community.
- Published
- 2017
39. Bayes Risk Error is a Bregman Divergence
- Author
-
Kush R. Varshney
- Subjects
Bayes' rule, Bayes' theorem, Quantization (signal processing), Likelihood-ratio test, Signal Processing, Statistics, Bayesian probability, Bayes error rate, Detection theory, Electrical and Electronic Engineering, Bregman divergence, Mathematics - Abstract
In previous work reported in these Transactions, we proposed a new distortion measure for the quantization of prior probabilities that are used in the threshold of likelihood ratio test detection: Bayes risk error. In this correspondence, we show that the Bayes risk error is a member of the class of Bregman divergences and discuss the implications of this fact.
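The defining Bregman identity D_phi(p, q) = phi(p) - phi(q) - phi'(q)(p - q) can be checked numerically. Note the paper's generator involves the minimum Bayes risk as a function of the prior; negative binary entropy is used below only because it is a familiar convex generator whose Bregman divergence is the binary KL divergence.

```python
import numpy as np

def bregman(phi, grad_phi, p, q):
    """Bregman divergence D_phi(p, q) = phi(p) - phi(q) - phi'(q) * (p - q)."""
    return phi(p) - phi(q) - grad_phi(q) * (p - q)

def neg_entropy(p):
    """Negative binary entropy, a convex generator on (0, 1)."""
    return p * np.log(p) + (1 - p) * np.log(1 - p)

def neg_entropy_grad(p):
    """Derivative of the negative binary entropy: log(p / (1 - p))."""
    return np.log(p / (1 - p))
```

The divergence is nonnegative and vanishes only when p = q, the structural properties that membership in the Bregman class guarantees for distortion measures like the Bayes risk error.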
- Published
- 2011
40. Sensitivity of Quadratic Gaussian Matching to Interference
- Author
-
Lav R. Varshney and Sanjoy K. Mitter
- Subjects
Gaussian, Noise (electronics), Computer Science Applications, Channel capacity, Interference (communication), Transmission (telecommunications), Modeling and Simulation, Sensitivity (control systems), Electrical and Electronic Engineering, Telecommunications, Algorithm, Decoding methods, Mathematics, Communication channel - Abstract
It is well known that uncoded transmission of a memoryless Gaussian source over a memoryless additive white Gaussian noise channel achieves the optimal performance theoretically attainable. When there is additional interference in the channel, uncoded transmission is robust: it achieves the same sensitivity performance as the optimal scheme, as measured using the sensitivity results of Pinsker, Prelov, and Verdú.
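A quick Monte Carlo check of the first claim: scale-and-send transmission of a unit-variance Gaussian source over AWGN, followed by the linear MMSE estimator, attains the distortion-rate bound D = 1/(1 + SNR). The SNR value and sample size are arbitrary choices for the simulation.

```python
import numpy as np

def uncoded_mse(snr, n=200_000, seed=0):
    """Monte Carlo MSE of scale-and-send transmission of a unit-variance
    Gaussian source over an AWGN channel (noise variance 1), with the
    linear MMSE estimator at the receiver."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)                      # source samples, variance 1
    y = np.sqrt(snr) * x + rng.standard_normal(n)   # channel output
    x_hat = (np.sqrt(snr) / (1 + snr)) * y          # E[X | Y] for jointly Gaussian X, Y
    return float(np.mean((x - x_hat) ** 2))
```

At SNR 3, the empirical MSE is close to 1/(1 + 3) = 0.25, which equals Shannon's bound 2^(-2C) at the channel capacity C = 0.5*log2(1 + SNR): uncoded transmission is optimal here.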
- Published
- 2011
41. International conference on systems theory and applications
- Author
-
R. Varshney
- Subjects
Systems theory, Management science, Computer science, Applied Mathematics, Signal Processing, Electrical and Electronic Engineering - Published
- 1981