
Dynamic Social Learning Under Graph Constraints

Authors :
Konstantin Avrachenkov
Suhail M. Shah
Sharayu Moharir
Vivek S. Borkar
Network Engineering and Operations (NEO), Inria Sophia Antipolis - Méditerranée (CRISAM), Institut National de Recherche en Informatique et en Automatique (Inria)
Department of Electrical Engineering, IIT Bombay (EE-IIT)
Indian Institute of Technology Kanpur (IIT Kanpur)
Hong Kong University of Science and Technology (HKUST)
This work was supported by the grant 'Machine Learning for Network Analytics' from the Indo-French Centre for the Promotion of Advanced Scientific Research (Cefipra) and by the grant 'Distributed Learning and Control for Network Analysis' from Nokia Bell Labs.
Source :
IEEE Transactions on Control of Network Systems, 2022, 9 (3), pp. 1435-1446. ⟨10.1109/TCNS.2021.3114377⟩
Publication Year :
2022
Publisher :
Institute of Electrical and Electronics Engineers (IEEE), 2022.

Abstract

We introduce a model of graph-constrained dynamic choice with reinforcement modeled by positively $\alpha$-homogeneous rewards. We show that its empirical process, which can be written as a stochastic approximation recursion with Markov noise, has the same probability law as a certain vertex reinforced random walk. We use this equivalence to show that for $\alpha > 0$, the asymptotic outcome concentrates around the optimum in a certain limiting sense when 'annealed' by letting $\alpha \to \infty$ slowly.
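To make the abstract's construction concrete, here is a minimal simulation sketch of a vertex-reinforced random walk on a small graph, where the transition weight to a neighbor is its visit count raised to the power $\alpha$, and $\alpha$ is slowly increased over time to mimic the annealing $\alpha \to \infty$. All names, the graph, and the annealing schedule are illustrative assumptions, not the paper's actual construction.

```python
import random

def vrrw(adj, steps=2000, anneal=0.001, seed=0):
    """Hypothetical sketch of a vertex-reinforced random walk:
    from the current vertex, move to neighbor u with probability
    proportional to n(u)**alpha, where n(u) is u's visit count and
    alpha grows slowly over time (an assumed annealing schedule)."""
    rng = random.Random(seed)
    counts = {v: 1.0 for v in adj}        # visit counts, initialized at 1
    v = next(iter(adj))                   # arbitrary starting vertex
    for t in range(steps):
        alpha = 1.0 + anneal * t          # slowly let alpha -> infinity
        nbrs = adj[v]
        weights = [counts[u] ** alpha for u in nbrs]
        # sample a neighbor with probability proportional to its weight
        x = rng.random() * sum(weights)
        for u, w in zip(nbrs, weights):
            x -= w
            if x <= 0:
                v = u
                break
        counts[v] += 1.0                  # reinforce the visited vertex
    return counts

# Example: run the walk on a 4-cycle graph
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
hist = vrrw(adj)
```

As $\alpha$ grows, the sampling becomes increasingly greedy toward the most-visited (most-reinforced) vertices, which is the mechanism by which the annealed process concentrates its empirical measure.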

Details

ISSN :
2372-2533 and 2325-5870
Volume :
9
Database :
OpenAIRE
Journal :
IEEE Transactions on Control of Network Systems
Accession number :
edsair.doi.dedup.....b12c430d9ea1b5f9dc499bfed0533f1c
Full Text :
https://doi.org/10.1109/tcns.2021.3114377