
Differentially private distributed online optimization via push-sum one-point bandit dual averaging.

Authors :
Zhao, Zhongyuan
Yang, Ju
Gao, Wang
Wang, Yan
Wei, Mengli
Source :
Neurocomputing. Mar 2024, Vol. 572.
Publication Year :
2024

Abstract

This paper focuses on the distributed online optimization problem in multi-agent systems under privacy-preservation requirements. Each agent exchanges local information with its neighboring agents over strongly connected, time-varying directed graphs. Since the information transmission process is prone to leakage, a distributed push-sum dual averaging algorithm based on a differential privacy mechanism is proposed to protect the privacy of the data. In addition, to handle situations where the gradient information of the node cost function is unavailable, a one-point gradient estimator is designed to approximate the true gradient and guide the update of the decision variables. With an appropriate choice of the stepsizes and the exploration parameters, the algorithm effectively protects the privacy of agents while achieving sublinear regret at the rate O(T^{3/4}). Furthermore, this paper also explores the effect of the one-point estimation parameters on the regret in the online setting and investigates the relation between the convergence of the individual regret and the differential privacy level. Finally, several federated learning experiments are conducted to verify the efficacy of the algorithm. [ABSTRACT FROM AUTHOR]
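For intuition, the sketch below illustrates, in plain Python, the two building blocks named in the abstract: a one-point gradient estimate that uses a single loss evaluation at a randomly perturbed point, and a dual-averaging step whose accumulated gradient is perturbed with Laplace noise for differential privacy. This is a minimal single-agent sketch only; the function names, the toy quadratic cost, the stepsize and exploration schedules, and the noise calibration are illustrative assumptions, and the push-sum mixing over the time-varying directed graph used in the paper is omitted.

```python
import numpy as np

def one_point_gradient_estimate(loss, x, delta, rng):
    """One-point (single-evaluation) bandit gradient estimator.

    Queries the loss only at the perturbed point x + delta * u and rescales
    by (d / delta) * u; this is an unbiased estimate of the gradient of a
    smoothed version of the loss.
    """
    d = x.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)               # uniform direction on the unit sphere
    return (d / delta) * loss(x + delta * u) * u

def dp_dual_averaging_step(z, grad_est, step, epsilon, sensitivity, rng):
    """One differentially private dual-averaging update (illustrative).

    Laplace noise with scale sensitivity / epsilon is added to the estimated
    gradient before it is accumulated into the dual variable z; the primal
    decision minimizes the regularized dual average, which for a Euclidean
    proximal term is simply -step * z.
    """
    noise = rng.laplace(scale=sensitivity / epsilon, size=z.shape)
    z_new = z + grad_est + noise          # noisy gradient accumulation
    x_new = -step * z_new                 # prox step with (1/2)||x||^2 regularizer
    return z_new, x_new

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 5
    target = np.ones(d)
    loss = lambda x: 0.5 * np.sum((x - target) ** 2)   # toy local cost

    z, x = np.zeros(d), np.zeros(d)
    for t in range(1, 2001):
        delta = t ** -0.25                # shrinking exploration parameter
        step = 0.5 / np.sqrt(t)           # diminishing stepsize
        g = one_point_gradient_estimate(loss, x, delta, rng)
        z, x = dp_dual_averaging_step(z, g, step, epsilon=1.0,
                                      sensitivity=1.0, rng=rng)
    print("final loss:", loss(x))
```

In the paper's distributed setting, each agent would additionally maintain push-sum weights and average its noisy dual variable with those received from in-neighbors on the directed graph before taking the proximal step.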

Details

Language :
English
ISSN :
0925-2312
Volume :
572
Database :
Academic Search Index
Journal :
Neurocomputing
Publication Type :
Academic Journal
Accession number :
174917077
Full Text :
https://doi.org/10.1016/j.neucom.2023.127184