Search Results (4 results)
2. Consensus-Based Cooperative Algorithms for Training Over Distributed Data Sets Using Stochastic Gradients.
- Author
-
Li, Zhongguo, Liu, Bo, and Ding, Zhengtao
- Subjects
DISTRIBUTED algorithms, ALGORITHMS, TRACKING algorithms, PRIVATE networks, MACHINE learning, ONLINE education
- Abstract
In this article, distributed algorithms are proposed for training a group of neural networks with private data sets. Stochastic gradients are used to eliminate the need for true gradients. To obtain a universal model of the distributed neural networks trained on local data sets only, consensus tools are introduced to drive the model toward the optimum. Most existing works employ diminishing learning rates, which are often slow and impractical for online learning; constant learning rates are studied in some recent works, but the principle for choosing them is not well established. In this article, constant learning rates are adopted to give the proposed algorithms tracking ability. Under mild conditions, convergence of the proposed algorithms is established by analyzing the error dynamics of the connected agents, which yields an upper bound for selecting the constant learning rates. Performance of the proposed algorithms is analyzed with and without gradient noise, in the mean square error (MSE) sense. It is proved that the MSE converges with bounded errors determined by the gradient noise, and that the MSE converges to zero if gradient noise is absent. Simulation results validate the effectiveness of the proposed algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
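The consensus-plus-stochastic-gradient scheme described in this abstract can be illustrated with a minimal NumPy sketch. This is not the authors' algorithm, only a toy of the general pattern: each agent mixes its local model with its neighbors' models through a doubly stochastic matrix `W`, then takes a constant-rate stochastic gradient step on its private data; the network layout, data model, and step size below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, steps, eta = 4, 3, 3000, 0.05  # eta: constant learning rate

# Private data: each agent holds noisy samples of the same linear model
x_true = rng.normal(size=dim)
data = []
for _ in range(n_agents):
    A = rng.normal(size=(50, dim))
    y = A @ x_true + 0.01 * rng.normal(size=50)
    data.append((A, y))

# Doubly stochastic mixing matrix for a 4-agent ring (illustrative topology)
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])

x = rng.normal(size=(n_agents, dim))  # one local model per agent
for _ in range(steps):
    # Stochastic gradient: one random private sample per agent per step
    g = np.zeros_like(x)
    for i, (A, y) in enumerate(data):
        k = rng.integers(len(y))
        g[i] = (A[k] @ x[i] - y[k]) * A[k]
    # Consensus mixing followed by a constant-rate gradient step
    x = W @ x - eta * g

# All local models end up close to the common optimum and to each other
print(np.max(np.abs(x - x_true)))
```

With a constant learning rate the iterates do not converge exactly; they hover in a neighborhood of the optimum whose radius is set by the gradient noise, matching the bounded-MSE behavior the abstract describes.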
3. Tight Bounds on the Convergence Rate of Generalized Ratio Consensus Algorithms.
- Author
-
Gerencsér, Balázs and Gerencsér, László
- Subjects
DISTRIBUTED algorithms, RANDOM matrices, NONNEGATIVE matrices, ALGORITHMS, VALUATION of real property, SYMMETRIC matrices, RANDOM graphs
- Abstract
The problems discussed in this article are motivated by general ratio consensus algorithms, introduced by Kempe et al. in 2003 in a simple form as the push-sum algorithm and later extended by Bénézit et al. in 2010 under the name weighted gossip algorithm. We consider a communication protocol described by a strictly stationary, ergodic, sequentially primitive sequence of nonnegative matrices, applied iteratively to a pair of fixed initial vectors whose components, called values and weights, are defined at the nodes of a network. Ratio consensus problems study the asymptotic properties of the ratios of values to weights at each node, expecting convergence to the same limit for all nodes. The main results of this article provide upper bounds for the rate of almost sure exponential convergence in terms of the spectral gap associated with the given sequence of random matrices, and these upper bounds are shown to be sharp. Our results complement previous results of Picci and Taylor in 2013 and of Iutzeler et al. in 2013. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
4. An Accelerated Algorithm for Linear Quadratic Optimal Consensus of Heterogeneous Multiagent Systems.
- Author
-
Wang, Qishao, Duan, Zhisheng, Wang, Jingyao, Wang, Qingyun, and Chen, Guanrong
- Subjects
MULTIAGENT systems, DISTRIBUTED algorithms, ALGORITHMS, PROBLEM solving, INFORMATION storage & retrieval systems, NONLINEAR equations
- Abstract
An accelerated algorithm is proposed in this article for solving the linear quadratic optimal consensus problem of multiagent systems. To optimize the linear quadratic response and the final consensus state simultaneously, a nonseparable multiobjective optimization problem with coupled constraints on the decision variables is formulated. The main difficulty in solving it lies in the nonlinear coupling of the objectives, which is overcome by separating the problem into two independent, solvable single-objective subproblems using the alternating direction method of multipliers. A proximal gradient descent scheme is then introduced to approximate the exact optimal solutions of the subproblems and improve computational efficiency. Convergence analysis estimates the convergence rate and derives a convergence condition that is independent of any global information about the system and is therefore fully distributed. Furthermore, the solution of each subproblem is obtained in a distributed form, allowing the multiagent system to achieve optimal consensus. Numerical examples show the effectiveness of the accelerated algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
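The splitting idea in this abstract, separating a coupled problem into two alternating subproblems via the alternating direction method of multipliers with a proximal step, can be illustrated on a standard textbook problem. The sketch below solves a lasso problem, not the paper's LQ consensus problem: the smooth subproblem is solved exactly, the nonsmooth one via its proximal operator (soft-thresholding), and a dual update enforces the coupling constraint.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(30, 10))
b = A @ np.concatenate([np.ones(3), np.zeros(7)]) + 0.01 * rng.normal(size=30)
lam, rho = 0.1, 1.0   # l1 weight and ADMM penalty (illustrative choices)

# ADMM for: min 0.5*||Ax - b||^2 + lam*||z||_1  subject to  x = z
x = np.zeros(10); z = np.zeros(10); u = np.zeros(10)
AtA, Atb = A.T @ A, A.T @ b
for _ in range(200):
    # x-update: smooth subproblem, solvable in closed form here
    x = np.linalg.solve(AtA + rho * np.eye(10), Atb + rho * (z - u))
    # z-update: proximal operator of the l1 term (soft-thresholding)
    v = x + u
    z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
    # dual ascent on the coupling constraint x = z
    u += x - z

print(np.max(np.abs(x - z)))   # consensus residual between the two blocks
```

The same alternating structure carries over to harder couplings: each subproblem sees the other's variable only through the penalty term, and when a subproblem has no closed form it can be approximated by a proximal gradient step, as the abstract describes.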
Discovery Service for Jio Institute Digital Library