
Social Learning and Distributed Hypothesis Testing.

Authors :
Lalitha, Anusha
Javidi, Tara
Sarwate, Anand D.
Source :
IEEE Transactions on Information Theory; Sep2018, Vol. 64 Issue 9, p6161-6179, 19p
Publication Year :
2018

Abstract

This paper considers a problem of distributed hypothesis testing over a network. Individual nodes in a network receive noisy local (private) observations whose distribution is parameterized by a discrete parameter (hypothesis). The marginals of the joint observation distribution conditioned on each hypothesis are known locally at the nodes, but the true parameter/hypothesis is not known. An update rule is analyzed in which nodes first perform a Bayesian update of their belief (a distribution estimate over the hypotheses) based on their local observations, communicate these updates to their neighbors, and then perform a “non-Bayesian” linear consensus using the log-beliefs of their neighbors. Under mild assumptions, we show that the belief of any node on a wrong hypothesis converges to zero exponentially fast. We characterize the exponential rate of learning, which we call the network divergence, in terms of the nodes’ influence in the network and the divergences between the observations’ distributions. For a broad class of observation statistics, which includes distributions with unbounded support such as Gaussian mixtures, we show that the rate of rejection of a wrong hypothesis satisfies a large deviation principle: the probability of sample paths on which the rate of rejection of a wrong hypothesis deviates from the mean rate vanishes exponentially fast, and we characterize the rate function in terms of the nodes’ influence in the network and the local observation models. [ABSTRACT FROM AUTHOR]
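The learn-then-consensus rule described in the abstract admits a compact implementation. Below is a minimal sketch of one round, assuming a row-stochastic weight matrix W encoding neighbor influence and precomputed local likelihood values; the function and variable names (social_learning_step, beliefs, likelihoods) are illustrative, not taken from the paper.

```python
import numpy as np

def social_learning_step(beliefs, likelihoods, W):
    """One round of the learn-then-consensus rule sketched in the abstract.

    beliefs     : (n_nodes, n_hypotheses) current beliefs; rows sum to 1
                  and are assumed strictly positive
    likelihoods : (n_nodes, n_hypotheses) local likelihood f_i(x_i | theta)
                  evaluated at each node's fresh private observation
    W           : (n_nodes, n_nodes) row-stochastic weights; W[i, j] > 0
                  only if node j is a neighbor of node i
    """
    # 1. Local Bayesian update with the new private observation.
    bayes = beliefs * likelihoods
    bayes /= bayes.sum(axis=1, keepdims=True)

    # 2. "Non-Bayesian" consensus: linear averaging of neighbors'
    #    log-beliefs, i.e., a geometric average of the updated beliefs.
    log_next = W @ np.log(bayes)

    # 3. Renormalize back to probability vectors (shift by the row max
    #    before exponentiating for numerical stability).
    log_next -= log_next.max(axis=1, keepdims=True)
    next_beliefs = np.exp(log_next)
    next_beliefs /= next_beliefs.sum(axis=1, keepdims=True)
    return next_beliefs

# Toy usage: 3 nodes, 2 hypotheses, uniform initial beliefs.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
beliefs = np.full((3, 2), 0.5)
likelihoods = np.array([[0.9, 0.1], [0.6, 0.4], [0.7, 0.3]])  # toy values
beliefs = social_learning_step(beliefs, likelihoods, W)
```

Iterating this step is what the paper analyzes: beliefs on wrong hypotheses decay exponentially, and the rate (the network divergence) is, roughly, an influence-weighted sum of local Kullback-Leibler divergences, Σ_i v_i D(f_i(·|θ*) ‖ f_i(·|θ)), where v is the left eigenvector (eigenvector centrality) of the weight matrix W.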

Details

Language :
English
ISSN :
0018-9448
Volume :
64
Issue :
9
Database :
Complementary Index
Journal :
IEEE Transactions on Information Theory
Publication Type :
Academic Journal
Accession Number :
131346492
Full Text :
https://doi.org/10.1109/TIT.2018.2837050