14 results
Search Results
2. Push-Sum on Random Graphs: Almost Sure Convergence and Convergence Rate.
- Author
- Rezaienia, Pouya, Gharesifard, Bahman, Linder, Tamas, and Touri, Behrouz
- Subjects
- ALGORITHMS, DIRECTED graphs, HEURISTIC algorithms, TELECOMMUNICATION systems, MATHEMATICAL optimization, RANDOM graphs
- Abstract
In this paper, we study the problem of achieving average consensus over a random time-varying sequence of directed graphs by extending the class of so-called push-sum algorithms to such random scenarios. Provided that an ergodicity notion, which we term the directed infinite flow property, holds and the auxiliary states of agents are uniformly bounded away from zero infinitely often, we prove the almost sure convergence of the evolutions of this class of algorithms to the average of initial states. Moreover, for a random sequence of graphs generated using a so-called time-varying $B$-irreducible probability matrix, we establish convergence rates for the proposed push-sum algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
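To make the mechanism in entry 2 concrete, here is a minimal push-sum sketch (assuming Python with NumPy). The column-stochastic weight matrix and initial values are hypothetical, and the sketch uses a single fixed directed graph; the paper itself treats random time-varying sequences of graphs, which is not modeled here.

```python
import numpy as np

# Column-stochastic weight matrix of a fixed directed graph (illustrative only).
# P[i, j] is the weight node j assigns to the share it sends to node i.
P = np.array([
    [0.5, 0.0, 0.5],
    [0.5, 0.5, 0.0],
    [0.0, 0.5, 0.5],
])
assert np.allclose(P.sum(axis=0), 1.0), "columns must sum to one"

x = np.array([1.0, 5.0, 9.0])   # initial values; true average is 5.0
y = np.ones_like(x)             # auxiliary (push-sum) states

for _ in range(100):
    x = P @ x                   # push weighted shares of the value state
    y = P @ y                   # push weighted shares of the auxiliary state
    z = x / y                   # ratio estimate held by each node

print(z)  # each entry approaches the average of the initial values, 5.0
```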
3. Linear Convergence and Metric Selection for Douglas-Rachford Splitting and ADMM.
- Author
- Giselsson, Pontus and Boyd, Stephen
- Subjects
- LINEAR systems, STOCHASTIC convergence, MULTIPLIERS (Mathematical analysis), MATHEMATICAL optimization, ALGORITHMS
- Abstract
Recently, several convergence rate results for Douglas-Rachford splitting and the alternating direction method of multipliers (ADMM) have been presented in the literature. In this paper, we show global linear convergence rate bounds for Douglas-Rachford splitting and ADMM under strong convexity and smoothness assumptions. We further show that the rate bounds are tight for the class of problems under consideration for all feasible algorithm parameters. For problems that satisfy the assumptions, we show how to select step-size and metric for the algorithm that optimize the derived convergence rate bounds. For problems with a similar structure that do not satisfy the assumptions, we present heuristic step-size and metric selection methods. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
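A minimal Douglas-Rachford sketch to accompany entry 3, on a hypothetical toy problem: projecting a point onto a box, written as the sum of a strongly convex quadratic and an indicator function. The metric and step-size selection rules derived in the paper are not reproduced; the choice of gamma below is arbitrary.

```python
import numpy as np

# Toy problem: minimize 0.5*||x - a||^2 subject to x in the box [-1, 1]^n,
# written as f(x) + g(x) with f smooth and strongly convex and g an indicator.
a = np.array([2.0, -3.0, 0.5])
gamma = 1.0  # step-size; the paper shows how to optimize this under its assumptions

def prox_f(v):
    # prox of gamma * 0.5*||x - a||^2
    return (v + gamma * a) / (1.0 + gamma)

def prox_g(v):
    # prox of the indicator of the box = Euclidean projection onto it
    return np.clip(v, -1.0, 1.0)

z = np.zeros_like(a)
for _ in range(200):
    x = prox_f(z)
    y = prox_g(2 * x - z)      # reflected point through x, then prox of g
    z = z + y - x              # Douglas-Rachford update of the governing sequence

print(x)  # approaches the projection of a onto the box: [1, -1, 0.5]
```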
4. Optimizing Leader Influence in Networks Through Selection of Direct Followers.
- Author
- Mai, Van Sy and Abed, Eyad H.
- Subjects
- GREEDY algorithms, MATHEMATICAL optimization, MATHEMATICAL models, ALGORITHMS, MATHEMATICAL analysis
- Abstract
This paper considers the problem of a leader that seeks to optimally influence the opinions of agents in a directed network through connecting with a limited number of the agents (“direct followers”), possibly in the presence of a fixed competing leader. The settings involving a single leader and two competing leaders are unified into a general combinatoric optimization problem, for which two heuristic approaches are developed. The first approach is based on a convex relaxation scheme, possibly in combination with the $\ell_1$-norm regularization technique, and the second is based on a greedy selection strategy. The main technical novelties of this work are in the establishment of supermodularity of the objective function and convexity of its continuous relaxation. The greedy approach is guaranteed to have a lower bound on the approximation ratio sharper than $(1-1/e)$, while the convex approach can benefit from efficient (customized) numerical solvers to have practically comparable solutions possibly with faster computation times. The two approaches can be combined to provide improved results. In numerical examples, the approximation ratio can be made to reach $90\%$ or higher depending on the number of direct followers. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
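Entry 4's greedy approach can be illustrated with a generic greedy-selection sketch. The coverage objective below is a hypothetical stand-in; the paper's opinion-dynamics objective, its supermodularity analysis, and the sharpened approximation-ratio bound are not reproduced.

```python
# Generic greedy selection of k "direct followers" for a monotone set objective.
# The candidate sets and objective are hypothetical illustrative data.
candidates = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d", "e"}, 4: {"e"}}

def objective(chosen):
    # Stand-in objective: number of items covered by the chosen candidates.
    covered = set()
    for i in chosen:
        covered |= candidates[i]
    return len(covered)

def greedy(k):
    chosen = []
    for _ in range(k):
        remaining = [i for i in candidates if i not in chosen]
        # pick the candidate with the largest marginal gain
        best = max(remaining, key=lambda i: objective(chosen + [i]) - objective(chosen))
        chosen.append(best)
    return chosen

picks = greedy(2)
print(picks, objective(picks))  # e.g. [3, 1] covering 5 items
```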
5. Accelerated Distributed MPC of Linear Discrete-Time Systems With Coupled Constraints.
- Author
- Wang, Zheming and Ong, Chong-Jin
- Subjects
- PREDICTIVE control systems, DISCRETE-time systems, ALGORITHMS, MATHEMATICAL optimization, LAGRANGIAN functions, CONSTRAINTS (Physics)
- Abstract
This paper proposes a distributed model predictive control (MPC) approach for a family of discrete-time linear systems with local (uncoupled) and global (coupled) constraints. The proposed approach is based on the dual problem of an overall MPC optimization problem involving all systems, which is then solved distributively using a modified distributed Nesterov-accelerated-gradient algorithm. To further reduce the computational requirement, this approach allows for early termination of the distributed gradient algorithm. This is made possible via a consensus algorithm that determines the satisfaction of the termination condition and by appropriate tightening of the coupled constraints. Under reasonable assumptions, the approach is able to produce a suboptimal solution as long as the network of the systems is connected while ensuring recursive feasibility and exponential stability of the closed-loop system. The performance of the proposed approach is demonstrated by a numerical example. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
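A toy sketch in the spirit of entry 5: separable quadratic objectives coupled by a single resource constraint, with the dual maximized by a Nesterov-accelerated projected gradient method. The MPC formulation, constraint tightening, and consensus-based stopping test of the paper are not modeled; the data below are hypothetical.

```python
import numpy as np

# Agents minimize separable quadratics subject to one coupled constraint sum(x) <= c;
# the dual is solved with a Nesterov-accelerated projected gradient method.
a = np.array([3.0, 2.0, 1.0])   # each agent i wants x_i close to a_i
c = 3.0                          # coupled resource budget
L = len(a)                       # Lipschitz constant of the dual gradient here

lam, lam_prev = 0.0, 0.0         # dual multiplier for the coupled constraint
for k in range(100):
    mu = lam + (k / (k + 3)) * (lam - lam_prev)   # Nesterov momentum
    x = a - mu                                    # agents' local minimizers given mu
    grad = x.sum() - c                            # dual gradient
    lam_prev = lam
    lam = max(0.0, mu + grad / L)                 # projected ascent step

x = a - lam
print(lam, x)   # approaches lam = 1 and x = [2, 1, 0] for this data
```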
6. Distributed Constrained Optimization Over Unbalanced Directed Networks Using Asynchronous Broadcast-Based Algorithm.
- Author
- Li, Huaqing, Lu, Qingguo, Chen, Guo, Huang, Tingwen, and Dong, Zhaoyang
- Subjects
- CONSTRAINED optimization, DISTRIBUTED algorithms, COST functions, ALGORITHMS, MATHEMATICAL optimization, PROBLEM solving, CONVEX sets
- Abstract
This article focuses on distributed convex optimization problems over an unbalanced directed multiagent (no central coordinator) network with inequality constraints. The goal is to cooperatively minimize the sum of all locally known convex cost functions. Every single agent in the network only knows its local objective function and local inequality constraint, and is constrained to a privately known convex set. Furthermore, we particularly discuss the scenario in which the interactions among agents over the whole network are subject to possible link failures. To collaboratively solve the optimization problem, we concentrate on an epigraph form of the original constrained optimization to overcome the unbalancedness of directed networks, and propose a new distributed asynchronous broadcast-based optimization algorithm. The algorithm allows not only the agents' updates to be asynchronous and fully distributed, but also the step-sizes of all agents to be uncoordinated. An important characteristic of the proposed algorithm is that it copes with the constrained optimization problem over unbalanced directed networks whose communications are subject to possible link failures. Under two standard assumptions, namely that the directed communication network is strongly connected and that the subgradients of all local objective functions are bounded, we provide an explicit convergence analysis of the algorithm. Simulation results from three numerical experiments substantiate the feasibility of the algorithm and validate the theoretical findings. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
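Entry 6 builds on an epigraph form of the constrained problem. The following one-dimensional sketch (hypothetical data, brute-force grid evaluation) only illustrates that reformulation; the distributed asynchronous broadcast algorithm itself is not modeled.

```python
import numpy as np

# Original:  minimize f0(x) = |x - 2| + |x + 1|   subject to  x <= 0.5
# Epigraph:  minimize t  subject to  f0(x) - t <= 0  and  x <= 0.5
def f0(x):
    return np.abs(x - 2) + np.abs(x + 1)

xs = np.linspace(-3, 3, 6001)
feas = xs <= 0.5

# Solve the original form on a grid.
x_orig = xs[feas][np.argmin(f0(xs[feas]))]

# Epigraph form: the smallest feasible t for a given x is exactly f0(x),
# so minimizing t over the epigraph gives the same optimal value.
t_epi = np.min(f0(xs[feas]))

print(x_orig, f0(x_orig), t_epi)   # both forms attain the same optimal value (3.0)
```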
7. Distributed Balancing With Constrained Integer Weights.
- Author
- Rikos, Apostolos I. and Hadjicostis, Christoforos N.
- Subjects
- NUMERICAL analysis, MATHEMATICAL analysis, ALGORITHMS, MATHEMATICAL optimization, INTEGERS
- Abstract
We consider the distributed integer-weight-balancing problem in networks of nodes that are interconnected via directed edges, each able to admit a positive integer weight (or flow) within a certain interval, captured by lower and upper limits. We propose and analyze distributed iterative algorithms for obtaining admissible and balanced integer weights, i.e., integer weights within the given interval constraints, such that, for each node, the sum of the weights of the incoming edges equals the sum of the weights of the outgoing edges. The proposed algorithms assume that communication among pairs of nodes that are interconnected is bidirectional and are shown to lead to a set of admissible and balanced integer weights after a finite number of time steps (as long as the necessary and sufficient integer circulation conditions are satisfied on the given digraph). [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
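A minimal sketch of the notions used in entry 7: integer edge weights are "admissible" if each lies within its interval constraints, and "balanced" if, at every node, the incoming and outgoing weight sums coincide. The toy digraph is hypothetical, and the paper's distributed iterative algorithms are not reproduced.

```python
# Each edge is (tail, head, weight, lower, upper) on a toy 3-node digraph.
edges = [
    ("a", "b", 2, 1, 3),
    ("b", "c", 2, 1, 4),
    ("c", "a", 2, 2, 2),
]

def admissible(edges):
    # every integer weight lies within its interval [lower, upper]
    return all(l <= w <= u for (_, _, w, l, u) in edges)

def balanced(edges):
    # at each node, incoming weight sum equals outgoing weight sum
    net = {}
    for tail, head, w, _, _ in edges:
        net[tail] = net.get(tail, 0) - w   # outgoing weight
        net[head] = net.get(head, 0) + w   # incoming weight
    return all(v == 0 for v in net.values())

print(admissible(edges), balanced(edges))   # True True for this circulation
```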
8. Symbolic Optimal Control.
- Author
- Reissig, Gunther and Rungger, Matthias
- Subjects
- OPTIMAL control theory, MATHEMATICAL optimization, DISCRETIZATION methods, ALGORITHMS, NUMERICAL analysis
- Abstract
We present novel results on the solution of a class of leavable, undiscounted optimal control problems in the minimax sense for nonlinear, continuous-state, discrete-time plants. The problem class includes entry-(exit-)time problems as well as minimum-time, pursuit-evasion, and reach-avoid games as special cases. We utilize auxiliary optimal control problems (“abstractions”) to compute both upper bounds of the value function, i.e., of the achievable closed-loop performance, and symbolic feedback controllers realizing those bounds. The abstractions are obtained from discretizing the problem data, and we prove that the computed bounds and the performance of the symbolic controllers converge to the value function as the discretization parameters approach zero. In particular, if the optimal control problem is solvable on some compact subset of the state space, and if the discretization parameters are sufficiently small, then we obtain a symbolic feedback controller solving the problem on that subset. These results do not assume the continuity of the value function or any problem data, and they fully apply in the presence of hard state and control constraints. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
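The abstraction-based computation in entry 8 can be illustrated by minimax dynamic programming on a small, hypothetical finite transition system in which each state-input pair has a set of possible successors (the nondeterminism produced by discretization). The actual abstraction construction and the paper's convergence guarantees are not reproduced.

```python
import math

# Worst-case cost-to-go to the target state "T" on a tiny nondeterministic abstraction.
post = {
    ("A", "u1"): {"B"},
    ("A", "u2"): {"B", "C"},
    ("B", "u1"): {"T"},
    ("B", "u2"): {"C", "T"},
    ("C", "u1"): {"T"},
}
cost = {key: 1.0 for key in post}          # unit cost per transition (illustrative)
states = {"A", "B", "C", "T"}

V = {x: (0.0 if x == "T" else math.inf) for x in states}
for _ in range(len(states)):               # enough sweeps for this small example
    for x in states - {"T"}:
        options = [cost[(x, u)] + max(V[x2] for x2 in post[(x, u)])
                   for (y, u) in post if y == x]
        if options:
            V[x] = min(options)            # best input against the worst successor

print(V)   # e.g. V["A"] == 2.0, V["B"] == V["C"] == 1.0, V["T"] == 0.0
```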
9. Accelerated Dual Descent for Network Flow Optimization.
- Author
- Zargham, Michael, Ribeiro, Alejandro, Ozdaglar, Asuman, and Jadbabaie, Ali
- Subjects
- MATHEMATICAL optimization, ALGORITHMS, MATRICES (Mathematics), STOCHASTIC convergence, NUMERICAL analysis
- Abstract
We present a fast distributed solution to the convex network flow optimization problem. Our approach uses a family of dual descent algorithms that approximate the Newton direction to achieve faster convergence rates than existing distributed methods. The approximate Newton directions are obtained through matrix splitting techniques and sparse Taylor approximations of the inverse Hessian. We couple this descent direction with a distributed line search algorithm that can be computed using the same information as the descent direction. We show that, similarly to conventional Newton methods, the proposed algorithm exhibits superlinear convergence within a neighborhood of the optimal value. Numerical experiments corroborate that convergence times are one to two orders of magnitude faster than those of existing distributed optimization methods. A connection with recent developments that use consensus to compute approximate Newton directions is also presented. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
- Full Text
- View/download PDF
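A compact sketch of the approximation device behind entry 9: an approximate Newton direction from a diagonal matrix splitting and a truncated series for the inverse Hessian. The network-flow Hessian of the paper is not reproduced; the matrix below is a hypothetical diagonally dominant example that keeps the truncated series well behaved.

```python
import numpy as np

# Approximate d = H^{-1} g via the splitting H = D - B (D diagonal) and the
# truncated expansion H^{-1} ≈ sum_{k=0}^{N} (D^{-1} B)^k D^{-1}.
H = np.array([
    [4.0, -1.0,  0.0],
    [-1.0, 4.0, -1.0],
    [0.0, -1.0,  4.0],
])
g = np.array([1.0, 2.0, 3.0])

D = np.diag(np.diag(H))
B = D - H                            # off-diagonal part, so that H = D - B
M = np.linalg.inv(D) @ B             # iteration matrix D^{-1} B

def approx_newton_direction(N):
    d = np.zeros_like(g)
    term = np.linalg.solve(D, g)     # first term, D^{-1} g
    for _ in range(N + 1):
        d = d + term
        term = M @ term              # next term of the truncated series
    return d

for N in (0, 1, 5):
    err = np.linalg.norm(approx_newton_direction(N) - np.linalg.solve(H, g))
    print(N, err)                    # the error shrinks as more terms are kept
```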
10. Decentralized Prediction-Correction Methods for Networked Time-Varying Convex Optimization.
- Author
- Simonetto, Andrea, Koppel, Alec, Mokhtari, Aryan, Leus, Geert, and Ribeiro, Alejandro
- Subjects
- MATHEMATICAL optimization, WIRELESS sensor networks, STOCHASTIC convergence, CONVERGENCE (Telecommunication), ALGORITHMS
- Abstract
We develop algorithms that find and track the optimal solution trajectory of time-varying convex optimization problems that consist of local and network-related objectives. The algorithms are derived from the prediction-correction methodology, which corresponds to a strategy where the time-varying problem is sampled at discrete time instances, and then, a sequence is generated via alternatively executing predictions on how the optimizers at the next time sample are changing and corrections on how they actually have changed. Prediction is based on how the optimality conditions evolve in time, while correction is based on a gradient or Newton method, leading to decentralized prediction-correction gradient and decentralized prediction-correction Newton. We extend these methods to cases where the knowledge on how the optimization programs are changing in time is only approximate and propose decentralized approximate prediction-correction gradient and decentralized approximate prediction-correction Newton. Convergence properties of all the proposed methods are studied and empirical performance is shown on an application of a resource allocation problem in a wireless network. We observe that the proposed methods outperform existing running algorithms by orders of magnitude. The numerical results showcase a tradeoff between convergence accuracy, sampling period, and network communications. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
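A single-agent sketch of the prediction-correction idea in entry 10, on a hypothetical scalar problem; the decentralized, networked structure and the Newton variants are not modeled.

```python
import math

# Track the minimizer of f(x; t) = 0.5*(x - a(t))^2 with a(t) = sin(t),
# sampled every h seconds.
h = 0.1          # sampling period
gamma = 0.5      # correction step-size
a = math.sin
da = math.cos    # time derivative of a, used by the prediction step

x = 0.0
for k in range(200):
    t = k * h
    # Prediction: the optimality condition x - a(t) = 0 drifts in time as
    # x_dot = -(d2f/dx2)^{-1} * (d2f/dtdx) = a'(t); integrate over one period.
    x = x + h * da(t)
    # Correction: a few gradient steps on the newly sampled objective f(.; t + h).
    for _ in range(3):
        x = x - gamma * (x - a(t + h))

print(x, a(200 * h))   # the tracked x stays close to the moving optimum sin(t)
```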
11. Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling.
- Author
- Duchi, John C., Agarwal, Alekh, and Wainwright, Martin J.
- Subjects
- STOCHASTIC convergence, MATHEMATICAL optimization, STOCHASTIC processes, CONVEX functions, MACHINE learning, ALGORITHMS, ELECTRIC network topology
- Abstract
The goal of decentralized optimization over a network is to optimize a global objective formed by a sum of local (possibly nonsmooth) convex functions using only local computation and communication. It arises in various application domains, including distributed tracking and localization, multi-agent coordination, estimation in sensor networks, and large-scale machine learning. We develop and analyze distributed algorithms based on dual subgradient averaging, and we provide sharp bounds on their convergence rates as a function of the network size and topology. Our analysis allows us to clearly separate the convergence of the optimization algorithm itself and the effects of communication dependent on the network structure. We show that the number of iterations required by our algorithm scales inversely in the spectral gap of the network, and confirm this prediction's sharpness both by theoretical lower bounds and simulations for various networks. Our approach includes the cases of deterministic optimization and communication, as well as problems with stochastic optimization and/or communication. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
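A small sketch of distributed dual averaging in the spirit of entry 11, with hypothetical data: three agents minimize a sum of absolute-value losses over an interval, mixing dual variables through a doubly stochastic matrix. The paper's step-size scaling and network-scaling analysis are not reproduced.

```python
import numpy as np

# Three agents minimize sum_i |x - a_i| over X = [-10, 10]; with the proximal
# function psi(x) = 0.5*x^2 the projection step is a clipped scaling of the
# averaged dual variable.
a = np.array([1.0, 2.0, 6.0])             # optimum of the sum is the median, 2
P = np.array([[0.5, 0.25, 0.25],
              [0.25, 0.5, 0.25],
              [0.25, 0.25, 0.5]])         # doubly stochastic mixing matrix

z = np.zeros(3)                           # dual (accumulated subgradient) variables
x = np.zeros(3)                           # primal iterates, one per agent
for t in range(1, 2001):
    g = np.sign(x - a)                    # local subgradients of |x_i - a_i|
    z = P @ z + g                         # average neighbors' duals, add own subgradient
    alpha = 1.0 / np.sqrt(t)              # diminishing step-size
    x = np.clip(-alpha * z, -10.0, 10.0)  # prox/projection step for psi = 0.5*x^2

print(x)   # all agents approach the network-wide minimizer, approximately 2
```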
12. Fast-Lipschitz Optimization With Wireless Sensor Networks Applications.
- Author
- Fischione, Carlo
- Subjects
- MATHEMATICAL optimization, WIRELESS sensor networks, STOCHASTIC convergence, ELECTRIC interference, RADIO transmitter-receivers, ALGORITHMS, COMPUTATIONAL complexity, LAGRANGE equations, CONSTRAINT satisfaction
- Abstract
Motivated by the need for fast computations in wireless sensor networks, the new F-Lipschitz optimization theory is introduced for a novel class of optimization problems. These problems are defined by simple qualifying properties specified in terms of an increasing objective function and contractive constraints. It is shown that feasible F-Lipschitz problems always have a unique optimal solution that satisfies the constraints at equality. The solution is obtained quickly by asynchronous algorithms with certified convergence. F-Lipschitz optimization can be applied to both centralized and distributed optimization. Compared to traditional Lagrangian methods, which often converge linearly, the convergence of centralized F-Lipschitz problems is at least superlinear. Distributed F-Lipschitz algorithms converge fast, as opposed to traditional Lagrangian decomposition and parallelization methods, which generally converge slowly and at the price of many message passings. In both cases, the computational complexity is much lower than that of traditional Lagrangian methods. Examples of application of the new optimization method are given for distributed estimation and radio power control in wireless sensor networks. The drawback of F-Lipschitz optimization is that the qualifying properties might be difficult to check. For more general optimization problems, it is suggested that it is convenient to have conditions ensuring that the solution satisfies the constraints at equality. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
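The computational core suggested by entry 12, namely that the optimum satisfies the constraints at equality and can be reached by fixed-point iterations, can be sketched as follows. The linear map below is a hypothetical contractive example; the F-Lipschitz qualifying properties themselves are not checked here, and the asynchronous variant is replaced by a synchronous loop.

```python
import numpy as np

# Fixed-point iteration x = f(x) for a contractive map, mimicking the
# "constraints at equality" structure of F-Lipschitz problems.
A = np.array([[0.0, 0.2, 0.1],
              [0.1, 0.0, 0.2],
              [0.2, 0.1, 0.0]])
b = np.array([1.0, 2.0, 3.0])

def f(x):
    return A @ x + b            # contractive: the row sums of |A| are below 1

x = np.zeros(3)
for _ in range(50):
    x = f(x)                    # synchronous fixed-point iteration

print(x)                                         # iterate after 50 steps
print(np.linalg.solve(np.eye(3) - A, b))         # exact fixed point (I - A)^{-1} b
```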
13. Robust Tubes in Nonlinear Model Predictive Control.
- Author
- Cannon, Mark, Buerger, Johannes, Kouvaritakis, Basil, and Rakovic, Saša
- Subjects
- ROBUST control, NONLINEAR statistical models, PREDICTIVE control systems, MATHEMATICAL optimization, ELECTRON tubes, STOCHASTIC convergence, COMPUTER simulation, NONLINEAR systems, ALGORITHMS
- Abstract
Nonlinear model predictive control (NMPC) strategies based on linearization about predicted system trajectories enable the online NMPC optimization to be performed by a sequence of convex optimization problems. The approach relies on bounds on linearization errors in order to ensure constraint satisfaction and convergence of the performance index, both during the optimization at each sampling instant and along closed loop system trajectories. This technical note proposes bounds based on robust tubes constructed around predicted trajectories. To ensure local optimality, the bounds are non-conservative for the case of zero linearization error, which requires the tube cross sections to vary along predicted trajectories. The feasibility, stability and convergence properties of the algorithm are established without the need for predictions to satisfy local optimality criteria. The strategy is illustrated by numerical examples. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
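A rough, unconstrained sketch of the successive-linearization idea underlying entry 13: repeatedly linearize the dynamics around the predicted trajectory and solve the resulting convex least-squares subproblem for an improved input sequence. The robust tubes, constraint handling, and stability guarantees of the paper are not modeled; the dynamics and cost below are hypothetical.

```python
import numpy as np

# Dynamics x+ = x + dt*(sin(x) + u); minimize sum_k (x_k^2 + u_k^2).
dt, N, x0 = 0.2, 20, 1.0

def rollout(u):
    x = np.empty(N + 1)
    x[0] = x0
    for k in range(N):
        x[k + 1] = x[k] + dt * (np.sin(x[k]) + u[k])
    return x

u = np.zeros(N)
for _ in range(15):                       # successive linearization passes
    x = rollout(u)
    # Sensitivities d x_k / d u_j along the current predicted trajectory.
    J = np.zeros((N, N))                  # J[k-1, j] = d x_k / d u_j
    for k in range(1, N + 1):
        for j in range(k):
            prod = dt
            for m in range(j + 1, k):
                prod *= 1.0 + dt * np.cos(x[m])
            J[k - 1, j] = prod
    # Convex subproblem: minimize ||x_pred + J*du||^2 + ||u + du||^2 over du.
    A = np.vstack([J, np.eye(N)])
    b = -np.concatenate([x[1:], u])
    du = np.linalg.lstsq(A, b, rcond=None)[0]
    u = u + du

x = rollout(u)
print(x[-1], float(np.sum(x[1:] ** 2 + u ** 2)))   # state decays toward zero
```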
14. Asynchronous Broadcast-Based Convex Optimization Over a Network.
- Author
- Nedic, Angelia
- Subjects
- MARKOV processes, SYMMETRIC matrices, STOCHASTIC convergence, MATHEMATICAL optimization, ALGORITHMS, MULTIAGENT systems, DETECTORS
- Abstract
We consider a distributed multi-agent network system where each agent has its own convex objective function, which can be evaluated with stochastic errors. The problem consists of minimizing the sum of the agent functions over a commonly known constraint set, but without a central coordinator and without agents sharing the explicit form of their objectives. We propose an asynchronous broadcast-based algorithm where the communications over the network are subject to random link failures. We investigate the convergence properties of the algorithm for a diminishing (random) stepsize and a constant stepsize, where each agent chooses its own stepsize independently of the other agents. Under some standard conditions on the gradient errors, we establish almost sure convergence of the method to an optimal point for diminishing stepsize. For constant stepsize, we establish some error bounds on the expected distance from the optimal point and the expected function value. We also provide numerical results. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
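A simplified simulation in the spirit of entry 14 (not necessarily the paper's exact update rule): at each tick one agent broadcasts its iterate; every other agent that hears it (links fail at random) mixes it with its own iterate and takes a projected step using a noisy gradient and its own diminishing step-size. The objectives, constraint set, and network data are hypothetical.

```python
import numpy as np

# Objectives f_i(x) = 0.5*(x - a_i)^2, common constraint set X = [-5, 5].
rng = np.random.default_rng(0)
a = np.array([1.0, 3.0, 8.0, 4.0])       # network-wide optimum is mean(a) = 4
n = len(a)
x = np.zeros(n)
updates = np.zeros(n)                     # each agent counts its own updates

for t in range(1, 20001):
    i = rng.integers(n)                   # the agent that wakes up and broadcasts
    for j in range(n):
        if j == i or rng.random() < 0.3:  # 30% chance the link to agent j fails
            continue
        updates[j] += 1
        step = 1.0 / updates[j]           # uncoordinated, locally diminishing step-size
        noisy_grad = (x[j] - a[j]) + rng.normal(scale=0.1)
        x[j] = np.clip(0.5 * (x[j] + x[i]) - step * noisy_grad, -5.0, 5.0)

print(x)   # all agents approach the minimizer of the sum, approximately 4
```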