207 results for "primal-dual methods"
Search Results
2. A nested primal–dual iterated Tikhonov method for regularized convex optimization
- Author
- Aleotti, Stefano, Bonettini, Silvia, Donatelli, Marco, Prato, Marco, and Rebegoldi, Simone
- Published
- 2024
- Full Text
- View/download PDF
3. A projected-search interior-point method for nonlinearly constrained optimization.
- Author
- Gill, Philip E. and Zhang, Minxin
- Subjects
- INTERIOR-point methods, CONSTRAINED optimization, QUADRATIC programming, NONLINEAR programming
- Abstract
This paper concerns the formulation and analysis of a new interior-point method for constrained optimization that combines a shifted primal-dual interior-point method with a projected-search method for bound-constrained optimization. The method involves the computation of an approximate Newton direction for a primal-dual penalty-barrier function that incorporates shifts on both the primal and dual variables. Shifts on the dual variables allow the method to be safely "warm started" from a good approximate solution and avoid the possibility of very large solutions of the associated path-following equations. The approximate Newton direction is used in conjunction with a new projected-search line-search algorithm that employs a flexible non-monotone quasi-Armijo line search for the minimization of each penalty-barrier function. Numerical results are presented for a large set of constrained optimization problems. For comparison purposes, results are also given for two primal-dual interior-point methods that do not use projection. The first is a method that shifts both the primal and dual variables. The second is a method that involves shifts on the primal variables only. The results show that the use of both primal and dual shifts in conjunction with projection gives a method that is more robust and requires significantly fewer iterations. In particular, the number of times that the search direction must be computed is substantially reduced. Results from a set of quadratic programming test problems indicate that the method is particularly well-suited to solving the quadratic programming subproblem in a sequential quadratic programming method for nonlinear optimization. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. Near-optimal tensor methods for minimizing the gradient norm of convex functions and accelerated primal–dual tensor methods.
- Author
- Dvurechensky, Pavel, Ostroukhov, Petr, Gasnikov, Alexander, Uribe, César A., and Ivanova, Anastasiya
- Abstract
Motivated, in particular, by the entropy-regularized optimal transport problem, we consider convex optimization problems with linear equality constraints, where the dual objective has Lipschitz p-th order derivatives, and develop two approaches for solving such problems. The first approach is based on the minimization of the norm of the gradient in the dual problem and then the reconstruction of an approximate primal solution. Recently, Grapiglia and Nesterov [Tensor methods for finding approximate stationary points of convex functions, Optim. Methods Softw. (2020), pp. 1–34] showed lower complexity bounds for the problem of minimizing the gradient norm of a function with Lipschitz p-th order derivatives. Still, the question of optimal or near-optimal methods remained open, as the algorithms presented therein achieve suboptimal bounds only. We close this gap by proposing two near-optimal (up to logarithmic factors) methods with complexity bounds $ \tilde {O}(\varepsilon ^{-2(p+1)/(3p+1)}) $ and $ \tilde {O}(\varepsilon ^{-2/(3p+1)}) $ with respect to the initial objective residual and the distance between the starting point and solution, respectively. We then apply these results (having independent interest) to our primal–dual setting. As the second approach, we propose a direct accelerated primal–dual tensor method for convex problems with linear equality constraints, where the dual objective has Lipschitz p-th order derivatives. For this algorithm, we prove $ \tilde O (\varepsilon ^{-1 / (p + 1)}) $ complexity in terms of the duality gap and the residual in the constraints. We illustrate the practical performance of the proposed algorithms in experiments on logistic regression, the entropy-regularized optimal transport problem, and the minimal mutual information problem. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. Chambolle–Pock's Primal-Dual Method with Mismatched Adjoint.
- Author
- Lorenz, Dirk A. and Schneppe, Felix
- Subjects
- COMPUTED tomography, LINEAR operators, CONVEX sets, ADJOINT differential equations
- Abstract
The primal-dual method of Chambolle and Pock is a widely used algorithm to solve various optimization problems written as convex-concave saddle point problems. Each update step involves the application of both the forward linear operator and its adjoint. However, in practical applications like computerized tomography, it is often computationally favourable to replace the adjoint operator by a computationally more efficient approximation. This leads to an adjoint mismatch in the algorithm. In this paper, we analyze the convergence of Chambolle–Pock's primal-dual method under the presence of a mismatched adjoint in the strongly convex setting. We present an upper bound on the error of the primal solution and derive stepsizes and mild conditions under which convergence to a fixed point is still guaranteed. Furthermore we show linear convergence similar to the result of Chambolle–Pock's primal-dual method without the adjoint mismatch. Moreover, we illustrate our results both for an academic and a real-world inspired application. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
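The basic Chambolle–Pock iteration that the abstract above builds on can be sketched in a few lines. The example below applies it to a small lasso-type problem min_x ½‖Kx − b‖² + λ‖x‖₁, a standard test case rather than the paper's mismatched-adjoint setting; the data and parameters are illustrative only:

```python
import numpy as np

# Chambolle-Pock for min_x F(Kx) + G(x) with
#   F(z) = 0.5 * ||z - b||^2   and   G(x) = lam * ||x||_1.
# Both proximal operators are available in closed form.
rng = np.random.default_rng(0)
K = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
lam = 0.1

op_norm = np.linalg.norm(K, 2)
tau = sigma = 0.99 / op_norm          # ensures tau * sigma * ||K||^2 < 1

x = np.zeros(10)
x_bar = x.copy()
y = np.zeros(20)
for _ in range(5000):
    # dual step: prox of sigma * F*, where F*(y) = 0.5*||y||^2 + <b, y>
    y = (y + sigma * (K @ x_bar - b)) / (1.0 + sigma)
    # primal step: soft-thresholding = prox of tau * G
    v = x - tau * (K.T @ y)
    x_new = np.sign(v) * np.maximum(np.abs(v) - tau * lam, 0.0)
    # extrapolation ("bar" step) with theta = 1
    x_bar = 2.0 * x_new - x
    x = x_new

# lasso optimality: the correlation ||K^T(Kx - b)||_inf is bounded by lam
kkt_gap = np.max(np.abs(K.T @ (K @ x - b))) - lam
obj = 0.5 * np.sum((K @ x - b) ** 2) + lam * np.sum(np.abs(x))
```

Note the step-size condition τσ‖K‖² < 1; the mismatched-adjoint analysis in the entry above concerns what happens when `K.T` is replaced by a cheaper approximation.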
6. A primal–dual penalty method via rounded weighted-ℓ1 Lagrangian duality.
- Author
- Burachik, R. S., Kaya, C. Y., and Price, C. J.
- Subjects
- CONSTRAINED optimization, KISSING
- Abstract
We propose a new duality scheme based on a sequence of smooth minorants of the weighted-ℓ1 penalty function, interpreted as a parametrized sequence of augmented Lagrangians, to solve non-convex constrained optimization problems. For the induced sequence of dual problems, we establish strong asymptotic duality properties. Namely, we show that (i) the sequence of dual problems is convex and (ii) the dual values monotonically increase to the optimal primal value. We use these properties to devise a subgradient based primal–dual method, and show that the generated primal sequence accumulates at a solution of the original problem. We illustrate the performance of the new method with three different types of test problems: a polynomial non-convex problem, large-scale instances of the celebrated kissing number problem, and the Markov–Dubins problem. Our numerical experiments demonstrate that, when compared with the traditional implementation of a well-known smooth solver, our new method (using the same well-known solver in its subproblem) can find better quality solutions, i.e. 'deeper' local minima, or solutions closer to the global minimum. Moreover, our method seems to be more time efficient, especially when the problem has a large number of constraints. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
7. Data-Driven Minimax Optimization with Expectation Constraints
- Author
- Yang, Shuoguang, Li, Xudong, and Lan, Guanghui
- Abstract
Attention to data-driven optimization approaches, including the well-known stochastic gradient descent method, has grown significantly over recent decades, but data-driven constraints have rarely been studied because of the computational challenges of projections onto the feasible set defined by these hard constraints. In this paper, we focus on the nonsmooth convex-concave stochastic minimax regime and formulate the data-driven constraints as expectation constraints. The minimax expectation constrained problem subsumes a broad class of real-world applications, including data-driven robust optimization, optimization with misspecification, and area under the receiver operating characteristic curve (AUC) maximization with fairness constraints. We propose a class of efficient primal-dual algorithms to tackle the minimax expectation constrained problem and show that our algorithms converge at the optimal rate of O(1/√N), where N is the number of iterations. We demonstrate the practical efficiency of our algorithms by conducting numerical experiments on large-scale real-world applications.
- Published
- 2024
8. Adaptive coordinate sampling for stochastic primal–dual optimization.
- Author
- Liu, Huikang, Wang, Xiaolu, and So, Anthony Man‐Cho
- Subjects
- ALGORITHMS, MACHINE learning, COST control, FORECASTING
- Abstract
We consider the regularized empirical risk minimization (ERM) of linear predictors, which arises in a variety of problems in machine learning and statistics. After reformulating the original ERM as a bilinear saddle‐point problem, we can apply stochastic primal–dual methods to solve it. Sampling the primal or dual coordinates with a fixed nonuniform distribution is usually employed to accelerate the convergence of the algorithm, but such a strategy only exploits the global information of the objective function. To capture its local structures, we propose an adaptive importance sampling strategy that chooses the coordinates based on delicately designed nonuniform and nonstationary distributions. When our adaptive coordinate sampling strategy is applied to the Stochastic Primal‐Dual Coordinate (SPDC), we prove that the resulting algorithm enjoys linear convergence. Moreover, we show that the ideal form of our adaptive sampling exhibits strictly sharper convergence rate under certain conditions compared with the vanilla SPDC. We also extend our sampling strategy to other algorithms including Doubly Stochastic Primal‐Dual Coordinate (DSPDC) and Stochastic Primal‐Dual with O(1) per‐iteration cost and Variance Reduction (SPD1‐VR), where both primal and dual coordinates are randomly sampled. Our experiment results show that the proposed strategy significantly improves the convergence performance of the methods when compared with existing sampling strategies. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
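As a much-simplified illustration of the nonuniform-sampling baseline that the adaptive strategy above improves on, the sketch below runs randomized coordinate descent on a strongly convex quadratic, sampling each coordinate with probability proportional to its coordinate-wise Lipschitz constant (the diagonal of the Hessian). The problem data are synthetic; this is not the SPDC algorithm itself:

```python
import numpy as np

# Minimize f(x) = 0.5 * x^T A x - b^T x by randomized coordinate descent
# with a fixed nonuniform sampling distribution p_i ∝ L_i = A_ii.
rng = np.random.default_rng(1)
n = 30
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)        # symmetric positive definite
b = rng.standard_normal(n)

L_coord = np.diag(A).copy()        # coordinate-wise Lipschitz constants
p = L_coord / L_coord.sum()        # importance-sampling distribution

x = np.zeros(n)
for _ in range(20000):
    i = rng.choice(n, p=p)         # nonuniform coordinate sampling
    g = A[i] @ x - b[i]            # partial derivative along coordinate i
    x[i] -= g / A[i, i]            # exact minimization along coordinate i

x_star = np.linalg.solve(A, b)
err = np.linalg.norm(x - x_star)
```

The adaptive schemes in the entry above go further by making `p` change over the iterations according to local structure.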
9. Distributed Composite Optimization Over Relay-Assisted Networks.
- Author
- Shi, Chong-Xiao and Yang, Guang-Hong
- Subjects
- ALGORITHMS, MULTISENSOR data fusion, DISTRIBUTED algorithms, CONVEX functions, SIGNAL processing
- Abstract
This article studies the distributed composite optimization problem over relay-assisted networks (DCOP-RNs). By combining the local primal–dual updates with a data fusion strategy, an effective distributed algorithm is developed to solve the DCOP-RN. Compared with existing linearized alternating direction method of multipliers (ADMM) approaches, this article provides a novel interpretation of the proposed algorithm. More specifically, it is shown that the proposed algorithm can be interpreted as a simplified proximal augmented Lagrangian method. Within this framework, a simplified convergence analysis of the algorithm is conducted, based on which the convergence to an optimal solution to the DCOP-RN is proved. Further, taking into account the limited bandwidth in real networks, this article proposes a quantized distributed algorithm to solve the DCOP-RN. Especially, a relationship between the quantization resolution and the convergence accuracy of the algorithm is established. Finally, simulation examples verify the theoretical results. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
10. A primal–dual interior point method for a novel type-2 second order cone optimization
- Author
- Md Sarowar Morshed, Chrysafis Vogiatzis, and Md. Noor-E-Alam
- Subjects
- Second order cone optimization, Interior point methods, Primal–dual methods, Kernel functions, Applied mathematics. Quantitative methods, T57-57.97
- Abstract
In this paper, we define a new, special second order cone as a type-k second order cone. We focus on the case of k=2, which can be viewed as a second order conic optimization (SOCO) problem with an additional complicating variable. For this new problem, we develop the necessary prerequisites, based on previous work for the traditional SOCO problem. We then develop a primal–dual interior point algorithm for solving a type-2 second order conic optimization problem, based on a family of kernel functions suitable for this type-2 SOCO. We finally derive a new iteration bound for our framework.
- Published
- 2021
- Full Text
- View/download PDF
11. Mirror Descent and Convex Optimization Problems with Non-smooth Inequality Constraints
- Author
- Bayandina, Anastasia, Dvurechensky, Pavel, Gasnikov, Alexander, Stonyakin, Fedor, Titov, Alexander, Morel, Jean-Michel, Editor-in-Chief, Teissier, Bernard, Editor-in-Chief, Brion, Michel, Series Editor, De Lellis, Camillo, Series Editor, Figalli, Alessio, Series Editor, Khoshnevisan, Davar, Series Editor, Kontoyiannis, Ioannis, Series Editor, Lugosi, Gábor, Series Editor, Podolskij, Mark, Series Editor, Serfaty, Sylvia, Series Editor, Wienhard, Anna, Series Editor, Giselsson, Pontus, editor, and Rantzer, Anders, editor
- Published
- 2018
- Full Text
- View/download PDF
12. Primal–dual accelerated gradient methods with small-dimensional relaxation oracle.
- Author
- Nesterov, Yurii, Gasnikov, Alexander, Guminov, Sergey, and Dvurechensky, Pavel
- Abstract
In this paper, a new variant of accelerated gradient descent is proposed. The proposed method does not require any information about the objective function, uses exact line search for the practical acceleration of convergence, converges according to the well-known lower bounds for both convex and non-convex objective functions, possesses primal–dual properties and can be applied in the non-Euclidean setup. As far as we know, this is the first method possessing all of the above properties at the same time. We also present a universal version of the method which is applicable to non-smooth problems. We demonstrate how in practice one can efficiently use the combination of line search and primal-duality by considering a convex optimization problem with a simple structure (for example, linearly constrained). [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
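The exact line search mentioned in the abstract above has a closed form for quadratics: minimizing f(x) = ½xᵀAx − bᵀx along the steepest-descent direction gives the step α = gᵀg / gᵀAg. A minimal sketch on synthetic data (plain steepest descent with exact line search, not the accelerated method of the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)            # symmetric positive definite
b = rng.standard_normal(n)

x = np.zeros(n)
for _ in range(2000):
    g = A @ x - b                  # gradient of the quadratic
    if np.linalg.norm(g) < 1e-12:  # already at the minimizer
        break
    step = (g @ g) / (g @ (A @ g)) # exact line search along -g
    x = x - step * g

err = np.linalg.norm(A @ x - b)    # residual at the computed point
```

The closed-form step is what makes exact line search "free" here; for non-quadratic objectives a one-dimensional search routine plays the same role.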
13. A globally convergent stabilized SQP method
- Author
- Gill, PE and Robinson, DP
- Subjects
- nonlinear programming, nonlinear constraints, augmented Lagrangian, sequential quadratic programming, SQP methods, stabilized SQP, regularized methods, primal-dual methods, Operations Research, Applied Mathematics, Numerical and Computational Mathematics
- Abstract
Sequential quadratic programming (SQP) methods are a popular class of methods for nonlinearly constrained optimization. They are particularly effective for solving a sequence of related problems, such as those arising in mixed-integer nonlinear programming and the optimization of functions subject to differential equation constraints. Recently, there has been considerable interest in the formulation of stabilized SQP methods, which are specifically designed to handle degenerate optimization problems. Existing stabilized SQP methods are essentially local in the sense that both the formulation and analysis focus on the properties of the methods in a neighborhood of a solution. A new SQP method is proposed that has favorable global convergence properties yet, under suitable assumptions, is equivalent to a variant of the conventional stabilized SQP method in the neighborhood of a solution. The method combines a primal-dual generalized augmented Lagrangian function with a flexible line search to obtain a sequence of improving estimates of the solution. The method incorporates a convexification algorithm that allows the use of exact second derivatives to define a convex quadratic programming (QP) subproblem without requiring that the Hessian of the Lagrangian be positive definite in the neighborhood of a solution. This gives the potential for fast convergence in the neighborhood of a solution. Additional benefits of the method are that each QP subproblem is regularized and the QP subproblem always has a known feasible point. Numerical experiments are presented for a subset of the problems from the CUTEr test collection. © 2013 Society for Industrial and Applied Mathematics.
- Published
- 2013
14. Complexity and Applications of the Homotopy Principle for Uniformly Constrained Sparse Minimization.
- Author
- Brauer, Christoph and Lorenz, Dirk A.
- Subjects
- FISHER discriminant analysis, NONSMOOTH optimization
- Abstract
In this paper, we investigate the homotopy path related to ℓ1-norm minimization problems with ℓ∞-norm constraints. We establish an enhanced upper bound on the number of linear segments in the path and provide an example showing that the number of segments is exponential in the number of variables in the worst case. We also use the homotopy framework to develop grid-independent (cross-)validation schemes for sparse linear discriminant analysis and classification that make use of the entire path. Several numerical and statistical examples illustrate the applicability of the framework. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
15. Primal–Dual Methods for Large-Scale and Distributed Convex Optimization and Data Analytics.
- Author
- Jakovetic, Dusan, Bajovic, Dragana, Xavier, Joao, and Moura, Jose M. F.
- Subjects
- CONSTRAINT satisfaction, DISTRIBUTED algorithms, COMPUTER architecture, LINEAR programming, COMPLEXITY (Philosophy), CONVEX functions
- Abstract
The augmented Lagrangian method (ALM) is a classical optimization tool that solves a given “difficult” (constrained) problem via finding solutions of a sequence of “easier” (often unconstrained) subproblems with respect to the original (primal) variable, wherein constraints satisfaction is controlled via the so-called dual variables. ALM is highly flexible with respect to how primal subproblems can be solved, giving rise to a plethora of different primal–dual methods. The powerful ALM mechanism has recently proved to be very successful in various large-scale and distributed applications. In addition, several significant advances have appeared, primarily on precise complexity results with respect to computational and communication costs in the presence of inexact updates and design and analysis of novel optimal methods for distributed consensus optimization. We provide a tutorial-style introduction to ALM and its variants for solving convex optimization problems in large-scale and distributed settings. We describe control-theoretic tools for the algorithms’ analysis and design, survey recent results, and provide novel insights into the context of two emerging applications: federated learning and distributed energy trading. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
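The ALM mechanism the tutorial above describes can be sketched in its simplest setting, an equality-constrained quadratic: minimize ½‖x − c‖² subject to Ax = b. The primal subproblem is solved exactly (here in closed form) and the multipliers are updated by dual ascent; all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 3, 8
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
c = rng.standard_normal(n)

rho = 10.0                 # penalty parameter
lam = np.zeros(m)          # dual variables (Lagrange multiplier estimates)
x = np.zeros(n)
for _ in range(100):
    # primal subproblem: argmin_x 0.5||x-c||^2 + lam^T(Ax-b) + (rho/2)||Ax-b||^2,
    # which is a linear system for this quadratic objective
    x = np.linalg.solve(np.eye(n) + rho * A.T @ A,
                        c - A.T @ lam + rho * A.T @ b)
    # dual ascent: constraint violation drives the multiplier update
    lam += rho * (A @ x - b)

feas = np.linalg.norm(A @ x - b)
# the true solution is the projection of c onto the affine set {Ax = b}
x_exact = c - A.T @ np.linalg.solve(A @ A.T, A @ c - b)
gap = np.linalg.norm(x - x_exact)
```

The flexibility the tutorial emphasizes lies in how the primal subproblem is solved; replacing the exact solve above by inexact or distributed inner iterations yields the various primal–dual methods surveyed.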
16. An Efficient Parameterized Logarithmic Kernel Function for Semidefinite Optimization.
- Author
- Derbal, Louiza and Kebbiche, Zakia
- Abstract
In this paper, we present a primal-dual interior point algorithm for semidefinite optimization problems based on a new class of kernel functions. These functions constitute a combination of the classic kernel function and a barrier term. We derive the complexity bounds for large- and small-update methods, respectively. We show that the best known iteration bounds for large- and small-update methods can be achieved, namely O(q √n (log n)^((q+1)/q) log(n/ε)) for large-update methods and O(q^(3/2) (log n)^((q+1)/q) √n log(n/ε)) for small-update methods. We test the efficiency and the validity of our algorithm by running some computational tests, then we compare our numerical results with results obtained by algorithms based on different kernel functions. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
17. Primal-Dual Path-Following Methods For Nonlinear Programming
- Author
- Su, Fangyao
- Subjects
- Mathematics, Augmented Lagrangian method, Barrier methods, Interior methods, Nonlinear optimization, Path-following methods, Primal-dual methods
- Abstract
The main goal of this dissertation is to study the formulation and analysis of primal-dual path-following methods for nonlinear programming (NLP), which involves the minimization or maximization of a nonlinear objective function subject to constraints on the variables. Two important types of nonlinear program are problems with nonlinear equality constraints and problems with nonlinear inequality constraints. In this dissertation, two new methods are proposed for nonlinear programming. The first is a new primal-dual path-following augmented Lagrangian method (PDAL) for solving a nonlinear program with equality constraints only. The second is a new primal-dual path-following shifted penalty-barrier method (PDPB) for solving a nonlinear program with a mixture of equality and inequality constraints. The method of PDPB may be regarded as an extension of PDAL to handle nonlinear inequality constraints. Algorithms PDAL and PDPB are iterative methods that share the same "two-level" structure involving outer and inner iterations. In the outer iteration of PDAL, the optimality conditions are perturbed to define a "path-following trajectory" parameterized by a set of Lagrange multiplier estimates and a penalty parameter. The iterates are constructed to closely follow the trajectory towards a constrained local minimizer of the nonlinear program. If an outer iterate deviates significantly from the trajectory, then an inner iteration is invoked in which a primal-dual augmented Lagrangian merit function is minimized to force the iterates back to a neighborhood of the trajectory.
A similar approach is used to handle the inequality constraints in PDPB. In this case, the trajectory is followed towards a local solution of the mixed-constraint nonlinear program. This trajectory is parameterized by a set of Lagrange multiplier estimates and penalty and barrier parameters associated with the equality and inequality constraints. If an iterate moves away from the trajectory, a primal-dual shifted penalty-barrier merit function is minimized using a trust-region method. By introducing slack variables, global convergence can be achieved from any starting point without the need for an initial strictly feasible point. Furthermore, numerical experiments indicate that when minimizing the shifted barrier function, the trust-region method requires fewer matrix factorizations and iterations than a comparable line-search method.
- Published
- 2019
18. A Proximal Diffusion Strategy for Multiagent Optimization With Sparse Affine Constraints.
- Author
- Alghunaim, Sulaiman A., Yuan, Kun, and Sayed, Ali H.
- Subjects
- SMART power grids, DIFFUSION, ALGORITHMS, RESOURCE allocation, COST functions
- Abstract
This article develops a proximal primal–dual decentralized strategy for multiagent optimization problems that involve multiple coupled affine constraints, where each constraint may involve only a subset of the agents. The constraints are generally sparse, meaning that only a small subset of the agents are involved in them. This scenario arises in many applications, including decentralized control formulations, resource allocation problems, and smart grids. Traditional decentralized solutions tend to ignore the structure of the constraints and lead to degraded performance. We instead develop a decentralized solution that exploits the sparsity structure. Under constant step-size learning, the asymptotic convergence of the proposed algorithm is established in the presence of nonsmooth terms, and it occurs at a linear rate in the smooth case. We also examine how the performance of the algorithm is influenced by the sparsity of the constraints. Simulations illustrate the superior performance of the proposed strategy. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
19. Rapid infeasibility detection in a mixed logarithmic barrier-augmented Lagrangian method for nonlinear optimization.
- Author
- Armand, Paul and Tran, Ngoc Nguyen
- Subjects
- INTERIOR-point methods, LAGRANGIAN functions
- Abstract
We present a modification of a primal-dual algorithm based on a mixed augmented Lagrangian and a log-barrier penalty function. The goal of this new feature is to quickly detect infeasibility. An additional parameter is introduced to balance the minimization of the objective function and the realization of the constraints. The global convergence of the modified algorithm is analysed under mild assumptions. We also show that under a suitable choice of the parameters along the iterations, the rate of convergence of the algorithm to an infeasible stationary point is superlinear. This is the first local convergence result for the class of interior point methods in the infeasible case. We finally report some numerical experiments to show that this new algorithm is quite efficient at detecting infeasibility and does not deteriorate the overall behavior in the general case. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
20. Online Primal-Dual Methods With Measurement Feedback for Time-Varying Convex Optimization.
- Author
- Bernstein, Andrey, Dall'Anese, Emiliano, and Simonetto, Andrea
- Subjects
- ONLINE algorithms, LAGRANGIAN functions, HEURISTIC algorithms, CONVEX functions
- Abstract
This paper addresses the design and analysis of feedback-based online algorithms to control systems or networked systems based on performance objectives and engineering constraints that may evolve over time. The emerging time-varying convex optimization formalism is leveraged to model optimal operational trajectories of the systems, as well as explicit local and network-level operational constraints. Departing from existing batch and feed-forward optimization approaches, the design of the algorithms capitalizes on an online implementation of primal-dual projected-gradient methods; the gradient steps are, however, suitably modified to accommodate feedback from the system in the form of measurements, hence, the term “online optimization with feedback.” By virtue of this approach, the resultant algorithms can cope with model mismatches in the algebraic representation of the system states and outputs, they avoid pervasive measurements of exogenous inputs, and they naturally lend themselves to a distributed implementation. Under suitable assumptions, analytical convergence claims are established in terms of dynamic regret. Furthermore, when the synthesis of the feedback-based online algorithms is based on a regularized Lagrangian function, Q-linear convergence to solutions of the time-varying optimization problem is shown. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
21. A primal-dual interior-point method based on various selections of displacement step for symmetric optimization.
- Author
- Alzalg, Baha
- Subjects
- INTERIOR-point methods, MATHEMATICAL optimization, ALGORITHMS, COMPUTER programming, PROBLEM solving
- Abstract
In this paper, we develop a primal-dual central trajectory interior-point algorithm for symmetric programming problems and establish its complexity analysis. The main contribution of the paper is that it uniquely equips the central trajectory algorithm with various selections of the displacement step while solving symmetric programming. To show the efficiency of the proposed algorithm, these selections of calculating the displacement step are compared in numerical examples for second-order cone programming, which is a special case of symmetric programming. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
22. The complexity of primal-dual fixed point methods for ridge regression.
- Author
- Ribeiro, Ademir Alves and Richtárik, Peter
- Subjects
- FIXED point theory, REGRESSION analysis, MATHEMATICAL regularization, PARAMETERS (Statistics), MATHEMATICAL bounds, LINEAR systems
- Abstract
We study the ridge regression (L2-regularized least squares) problem and its dual, which is also a ridge regression problem. We observe that the optimality conditions describing the primal and dual optimal solutions can be formulated in several different but equivalent ways. The optimality conditions we identify form a linear system involving a structured matrix depending on a single relaxation parameter which we introduce for regularization purposes. This leads to the idea of studying and comparing, in theory and practice, the performance of the fixed point method applied to these reformulations. We compute the optimal relaxation parameters and uncover interesting connections between the complexity bounds of the variants of the fixed point scheme we consider. These connections follow from a close link between the spectral properties of the associated matrices. For instance, some reformulations involve purely imaginary eigenvalues; some involve real eigenvalues and others have all eigenvalues on the complex circle. We show that the deterministic Quartz method—which is a special case of the randomized dual coordinate ascent method with arbitrary sampling recently developed by Qu, Richtárik and Zhang—can be cast in our framework, and achieves the best rate in theory and in numerical experiments among the fixed point methods we study. Remarkably, the method achieves an accelerated convergence rate. Numerical experiments indicate that our main algorithm is competitive with the conjugate gradient method. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
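A minimal fixed-point sketch in the spirit of the entry above: the ridge regression optimality condition is the linear system Hx = Aᵀb with H = AᵀA + λI, and a relaxed fixed-point (Richardson) iteration converges with a rate governed by the spectrum of H, with the optimal relaxation parameter determined by its extreme eigenvalues. This is a simplified stand-in for intuition, not the paper's primal-dual reformulations or the Quartz method; data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 10))
b = rng.standard_normal(50)
lam = 1.0

H = A.T @ A + lam * np.eye(10)        # ridge normal-equations matrix
evals = np.linalg.eigvalsh(H)         # ascending eigenvalues of H
eta = 2.0 / (evals[0] + evals[-1])    # optimal relaxation parameter

x = np.zeros(10)
for _ in range(500):
    # fixed-point map x -> x - eta * (Hx - A^T b); its iteration matrix
    # I - eta*H has spectral radius (kappa-1)/(kappa+1) for this eta
    x = x - eta * (H @ x - A.T @ b)

x_closed = np.linalg.solve(H, A.T @ b)
gap = np.linalg.norm(x - x_closed)
```

The spectral-radius argument here is the scalar analogue of the analysis the abstract describes for the richer primal-dual reformulations.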
23. A primal-dual homotopy algorithm for ℓ1-minimization with ℓ∞-constraints.
- Author
- Brauer, Christoph, Lorenz, Dirk A., and Tillmann, Andreas M.
- Subjects
- HOMOTOPY theory, SPARSE approximations, LINEAR equations, LINEAR programming, ITERATIVE methods (Mathematics)
- Abstract
In this paper we propose a primal-dual homotopy method for ℓ1-minimization problems with infinity norm constraints in the context of sparse reconstruction. The natural homotopy parameter is the value of the bound for the constraints, and we show that there exists a piecewise linear solution path with finitely many break points for the primal problem and a respective piecewise constant path for the dual problem. We show that by solving a small linear program, one can jump to the next primal break point and then, by solving another small linear program, a new optimal dual solution is calculated which enables the next such jump in the subsequent iteration. Using a theorem of the alternative, we show that the method never gets stuck and indeed calculates the whole path in a finite number of steps. Numerical experiments demonstrate the effectiveness of our algorithm. In many cases, our method significantly outperforms commercial LP solvers; this is possible since our approach employs a sequence of considerably simpler auxiliary linear programs that can be solved efficiently with specialized active-set strategies. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
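The piecewise-linear solution path in the homotopy parameter can be seen directly in the special case A = I, where min ‖x‖₁ subject to ‖x − b‖∞ ≤ δ separates per coordinate and has the closed-form solution soft-threshold(b, δ). This is a worked special case for intuition, not the paper's algorithm for general A:

```python
import numpy as np

b = np.array([3.0, -1.5, 0.4, 2.2])

def x_path(delta):
    """Solution of min ||x||_1 s.t. ||x - b||_inf <= delta for A = I:
    each coordinate is the point of [b_i - delta, b_i + delta] closest to 0."""
    return np.sign(b) * np.maximum(np.abs(b) - delta, 0.0)

# The path is affine in delta between break points located at the |b_i|;
# coordinates drop to zero one by one as delta grows.
x_mid = x_path(0.4)    # -> [2.6, -1.1, 0.0, 1.8]
x_end = x_path(3.0)    # -> all zeros: b itself lies inside the band
```

The break points |b_i| are the analogue of the finitely many break points the paper tracks with its auxiliary linear programs.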
24. An interior-point method-based solver for simulation of aircraft parts riveting.
- Author
- Stefanova, Maria, Yakunin, Sergey, Petukhova, Margarita, Lupuleac, Sergey, and Kokkolaras, Michael
- Subjects
- PROBLEM solving, RIVETS & riveting, LINEAR equations, POLYNOMIAL approximation, SIMULATION methods & models
- Abstract
The particularities of the aircraft parts riveting process simulation necessitate the solution of a large number of contact problems. A primal-dual interior-point method-based solver is proposed for solving such problems efficiently. The proposed method features a worst-case polynomial complexity bound on the number of iterations, where n is the dimension of the problem and ε is a threshold related to the desired accuracy. In practice, the convergence is often faster than this worst-case bound, which makes the method applicable to large-scale problems. The computational challenge is solving the system of linear equations because the associated matrix is ill conditioned. To that end, the authors introduce a preconditioner and a strategy for determining effective initial guesses based on the physics of the problem. Numerical results are compared with ones obtained using the Goldfarb-Idnani algorithm. The results demonstrate the efficiency of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
25. A new accelerated algorithm for ill-conditioned ridge regression problems.
- Author
-
Silva, Tatiane C., Ribeiro, Ademir A., and Periçaro, Gislaine A.
- Subjects
LEAST squares ,REGRESSION analysis ,FIXED point theory ,STOCHASTIC convergence ,ITERATIVE methods (Mathematics) - Abstract
In this work, we study the primal and dual formulations of the regularized least squares problem with the L2 norm, known as ridge regression. We state the gradient method for the corresponding primal-dual problem through a fixed point approach. For this formulation, we present an original convergence analysis involving the spectral radius of a suitable iteration matrix. The main contribution of this work is to accelerate the studied method by means of a memory-like strategy. Some connections with the spectral properties of the associated iteration matrices are established to prove the convergence of the accelerated method. Preliminary experiments showed the good performance of the proposed algorithm when applied to ill-conditioned problems, providing better results than the conjugate gradient method for this class of problems. [ABSTRACT FROM AUTHOR] - Published
- 2018
- Full Text
- View/download PDF
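The memory-accelerated gradient scheme for ridge regression described in the abstract above can be sketched as follows. This is an illustrative reconstruction, not the authors' algorithm: a plain fixed-point gradient iteration on the ridge objective, with a heavy-ball momentum term standing in for the paper's memory-like acceleration; the function names and test instance are invented.

```python
import numpy as np

def ridge_gradient(A, b, lam, steps=2000, beta=0.0):
    """Gradient (fixed-point) iteration for min 0.5*||Ax - b||^2 + 0.5*lam*||x||^2,
    with optional heavy-ball momentum beta as a memory-like acceleration."""
    n = A.shape[1]
    L = np.linalg.norm(A, 2) ** 2 + lam      # Lipschitz constant of the gradient
    t = 1.0 / L                              # fixed step size
    x, x_prev = np.zeros(n), np.zeros(n)
    for _ in range(steps):
        grad = A.T @ (A @ x - b) + lam * x
        x_new = x - t * grad + beta * (x - x_prev)
        x_prev, x = x, x_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
lam = 0.5
# Closed-form ridge solution for reference.
x_star = np.linalg.solve(A.T @ A + lam * np.eye(10), A.T @ b)
x_plain = ridge_gradient(A, b, lam)
x_momentum = ridge_gradient(A, b, lam, beta=0.5)
```

Both runs converge to the closed-form solution; the momentum run typically needs fewer iterations on ill-conditioned instances, which is the effect the abstract's memory strategy targets.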
26. Primal and dual predicted decrease approximation methods.
- Author
-
Beck, Amir, Pauwels, Edouard, and Sabach, Shoham
- Subjects
- *
SUPPORT vector machines , *APPROXIMATION theory , *MATHEMATICAL models , *SIMULATION methods & models , *ALGORITHMS - Abstract
We introduce the notion of predicted decrease approximation (PDA) for constrained convex optimization, a flexible framework which includes as special cases known algorithms such as generalized conditional gradient, proximal gradient, greedy coordinate descent for separable constraints and working set methods for linear equality constraints with bounds. The new scheme allows the development of a unified convergence analysis for these methods. We further consider a partially strongly convex nonsmooth model and show that dual application of PDA-based methods yields new sublinear convergence rate estimates in terms of both primal and dual objectives. As an example of an application, we provide an explicit working set selection rule for SMO-type methods for training the support vector machine with an improved primal convergence analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
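One of the schemes the predicted-decrease-approximation framework above subsumes is the proximal gradient method. A minimal ISTA sketch for ℓ1-regularized least squares follows; the problem instance and names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (coordinatewise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient_lasso(A, b, lam, steps=3000):
    """ISTA for min 0.5*||Ax - b||^2 + lam*||x||_1, a special case
    covered by the predicted-decrease framework."""
    x = np.zeros(A.shape[1])
    t = 1.0 / np.linalg.norm(A, 2) ** 2     # step = 1 / Lipschitz constant
    for _ in range(steps):
        x = soft_threshold(x - t * (A.T @ (A @ x - b)), t * lam)
    return x

# Tiny sanity instance: with A = I the minimizer is soft_threshold(b, lam).
b = np.array([3.0, -0.2, 0.8, -2.5, 0.1])
x = proximal_gradient_lasso(np.eye(5), b, lam=1.0)
```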
27. Distributed Optimization with Coupling Constraints
- Author
-
Wu, Xuyang, Wang, He, Lu, Jie, Wu, Xuyang, Wang, He, and Lu, Jie
- Abstract
In this paper, we investigate distributed convex optimization with both inequality and equality constraints, where the objective function can be a general nonsmooth convex function and all the constraints can be both sparsely and densely coupling. By strategically integrating ideas from primal-dual, proximal, and virtual-queue optimization methods, we develop a novel distributed algorithm, referred to as IPLUX, to address the problem over a connected, undirected graph. We show that IPLUX achieves an $O(1/k)$ rate of convergence in terms of optimality and feasibility, which is stronger than the convergence results of the alternative methods and eliminates the standard assumption on the compactness of the feasible region. Finally, IPLUX exhibits faster convergence and higher efficiency than several state-of-the-art methods in the simulation. - Published
- 2022
- Full Text
- View/download PDF
28. Approximation Algorithms for Partial Covering Problems : Extended Abstract
- Author
-
Gandhi, Rajiv, Khuller, Samir, Srinivasan, Aravind, Goos, Gerhard, editor, Hartmanis, Juris, editor, van Leeuwen, Jan, editor, Orejas, Fernando, editor, and Spirakis, Paul G., editor
- Published
- 2001
- Full Text
- View/download PDF
29. Dual approaches to the minimization of strongly convex functionals with a simple structure under affine constraints.
- Author
-
Anikin, A., Gasnikov, A., Dvurechensky, P., Tyurin, A., and Chernov, A.
- Subjects
- *
CONVEX functions , *CONJUGATE gradient methods , *AFFINE transformations , *CONVEX domains , *NUMERICAL calculations , *MATHEMATICAL analysis - Abstract
A strongly convex function of simple structure (for example, separable) is minimized under affine constraints. A dual problem is constructed and solved by applying a fast gradient method. The necessary properties of this method are established; relying on them, we show that, under rather general conditions, the solution of the primal problem can be recovered with the same accuracy as the dual solution from the sequence generated by this method in the dual space of the problem. Although this approach seems natural, some previously unpublished rather subtle results necessary for its rigorous and complete theoretical substantiation in the required generality are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
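The dual fast-gradient approach described above admits a compact sketch on a toy strongly convex quadratic with one affine constraint. This is an illustrative reconstruction under assumed problem data, not the authors' code: the primal iterate is recovered from the dual point exactly as the abstract describes.

```python
import numpy as np

def dual_fast_gradient(A, b, c, T=200):
    """Fast (Nesterov-accelerated) gradient ascent on the dual of
    min 0.5*||x - c||^2  s.t.  Ax = b.
    The primal solution is recovered from the dual point via x(y) = c - A'y."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the dual gradient
    y = z = np.zeros(A.shape[0])
    t = 1.0
    for _ in range(T):
        x = c - A.T @ z                    # primal minimizer of the Lagrangian at z
        y_new = z + (A @ x - b) / L        # gradient ascent step on the dual
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = y_new + ((t - 1) / t_new) * (y_new - y)   # momentum extrapolation
        y, t = y_new, t_new
    return c - A.T @ y, y

A = np.array([[1.0, 1.0, 1.0]])            # single affine constraint: sum(x) = 6
b = np.array([6.0])
c = np.array([0.0, 1.0, 2.0])
x, y = dual_fast_gradient(A, b, c)
```

For this instance the KKT conditions give y* = -1 and x* = c + 1 = (1, 2, 3), which the recovered primal iterate matches.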
30. A Mixed Logarithmic Barrier-Augmented Lagrangian Method for Nonlinear Optimization.
- Author
-
Armand, Paul and Omheni, Riadh
- Subjects
- *
LAGRANGIAN functions , *NONLINEAR analysis , *MATHEMATICAL optimization , *PERTURBATION theory , *MATHEMATICAL inequalities , *MATHEMATICAL sequences - Abstract
We present a primal-dual algorithm for solving a constrained optimization problem. This method is based on a Newtonian method applied to a sequence of perturbed KKT systems. These systems follow from a reformulation of the initial problem under the form of a sequence of penalized problems, by introducing an augmented Lagrangian for handling the equality constraints and a log-barrier penalty for the inequalities. We detail the updating rules for monitoring the different parameters (Lagrange multiplier estimate, quadratic penalty and log-barrier parameter), in order to get strong global convergence properties. We show that one advantage of this approach is that it introduces a natural regularization of the linear system to solve at each iteration, for the solution of a problem with a rank deficient Jacobian of constraints. The numerical experiments show the good practical performances of the proposed method especially for degenerate problems. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
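The log-barrier half of the mixed method above can be illustrated on a one-dimensional toy inequality; the augmented-Lagrangian treatment of equalities is omitted, and the problem instance, names, and parameter schedule are invented for illustration, not taken from the paper.

```python
def barrier_solve(mu0=1.0, shrink=0.2, outer=12, newton_steps=20):
    """Log-barrier method for min (x - 2)^2  s.t.  x <= 1.
    Each outer iteration minimizes phi(x) = (x - 2)^2 - mu*log(1 - x)
    by damped Newton steps, then shrinks the barrier parameter mu."""
    x, mu = 0.0, mu0                        # strictly feasible start (x < 1)
    for _ in range(outer):
        for _ in range(newton_steps):
            g = 2 * (x - 2) + mu / (1 - x)  # phi'(x)
            h = 2 + mu / (1 - x) ** 2       # phi''(x) > 0, so phi is convex
            step = g / h
            # Damp the step so the iterate stays strictly feasible.
            while x - step >= 1:
                step *= 0.5
            x -= step
        mu *= shrink
    return x

x_opt = barrier_solve()
```

As mu shrinks, the barrier minimizer (approximately 1 - mu/2 here) tracks the constrained solution x* = 1 from the interior.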
31. A stabilized SQP method: superlinear convergence.
- Author
-
Gill, Philip, Kungurtsev, Vyacheslav, and Robinson, Daniel
- Subjects
- *
STOCHASTIC convergence , *NONLINEAR programming , *MATHEMATICAL programming , *DYNAMIC programming , *INTEGER programming - Abstract
Stabilized sequential quadratic programming (sSQP) methods for nonlinear optimization generate a sequence of iterates with fast local convergence regardless of whether or not the active-constraint gradients are linearly dependent. This paper concerns the local convergence analysis of an sSQP method that uses a line search with a primal-dual augmented Lagrangian merit function to enforce global convergence. The method is provably well-defined and is based on solving a strictly convex quadratic programming subproblem at each iteration. It is shown that the method has superlinear local convergence under assumptions that are no stronger than those required by conventional stabilized SQP methods. The fast local convergence is obtained by allowing a small relaxation of the optimality conditions for the quadratic programming subproblem in the neighborhood of a solution. In the limit, the line search selects the unit step length, which implies that the method does not suffer from the Maratos effect. The analysis indicates that the method has the same strong first- and second-order global convergence properties that have been established for augmented Lagrangian methods, yet is able to transition seamlessly to sSQP with fast local convergence in the neighborhood of a solution. Numerical results on some degenerate problems are reported. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
32. A stabilized SQP method: global convergence.
- Author
-
GILL, PHILIP E., KUNGURTSEV, VYACHESLAV, and ROBINSON, DANIEL P.
- Subjects
QUADRATIC programming ,LAGRANGIAN functions ,STOCHASTIC convergence ,NONLINEAR theories ,HESSIAN matrices - Abstract
Stabilized sequential quadratic programming (SQP) methods for nonlinear optimization are designed to provide a sequence of iterates with fast local convergence even when the active-constraint gradients are linearly dependent. This paper concerns the global convergence properties of a stabilized SQP method with a primal-dual augmented Lagrangian merit function. The proposed method incorporates two novel features. First, a flexible line search is used based on a direction formed from an approximate solution of a strictly convex quadratic programming (QP) subproblem and, when one exists, a direction of negative curvature for the primal-dual merit function. Second, when certain conditions hold, an approximate QP solution is computed by solving a single linear system defined in terms of an estimate of the optimal active set. We also establish two desirable convergence results. (i) It is shown that with an appropriate choice of termination condition, the method terminates in a finite number of iterations without the assumption of a constraint qualification. The method may be interpreted as an SQP method with an augmented Lagrangian safeguarding strategy. This safeguarding becomes relevant only when the iterates are converging to an infeasible stationary point of the norm of the constraint violations. Otherwise, the method terminates with a point that approximately satisfies certain second-order necessary conditions for optimality. In this situation, if all termination conditions are removed, then the limit points either satisfy the same second-order necessary conditions exactly or fail to satisfy a weak second-order constraint qualification. (ii) The global convergence analysis concerns a specific algorithm that estimates the least curvature of the merit function at each step. If negative curvature directions are omitted, the analysis still applies and establishes convergence to either first-order solutions or infeasible stationary points.
The superlinear convergence of the iterates and the formal local equivalence to stabilized SQP is established in a companion paper (Report CCoM 14-01, Center for Computational Mathematics, University of California, San Diego, 2014). [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
33. Distributed Semidefinite Programming With Application to Large-Scale System Analysis.
- Author
-
Pakazad, Sina Khoshfetrat, Hansson, Anders, Andersen, Martin S., and Rantzer, Anders
- Subjects
- *
DISTRIBUTED algorithms , *PARALLEL algorithms , *UNCERTAIN systems , *STOCHASTIC processes , *SYSTEM analysis - Abstract
Distributed algorithms for solving coupled semidefinite programs commonly require many iterations to converge. They also put high computational demand on the computational agents. In this paper, we show that in case the coupled problem has an inherent tree structure, it is possible to devise an efficient distributed algorithm for solving such problems. The proposed algorithm relies on predictor–corrector primal-dual interior-point methods, where we use a message-passing algorithm to compute the search directions distributedly. Message passing here is closely related to dynamic programming over trees. This allows us to compute the exact search directions in a finite number of steps. This is because computing the search directions requires a recursion over the tree structure and, hence, terminates after an upward and downward pass through the tree. Furthermore, this number can be computed a priori and only depends on the coupling structure of the problem. We use the proposed algorithm for analyzing robustness of large-scale uncertain systems distributedly. We test the performance of this algorithm using numerical examples. [ABSTRACT FROM PUBLISHER]
- Published
- 2018
- Full Text
- View/download PDF
34. A primal-dual penalty method via rounded weighted-ℓ1 Lagrangian duality
- Author
-
Regina S. Burachik, C. Y. Kaya, C. J. Price, Burachik, RS, Kaya, CY, and Price, CJ
- Subjects
Sequence ,Control and Optimization ,kissing number problem ,Applied Mathematics ,Duality (optimization) ,Management Science and Operations Research ,Lagrangian duality ,Markov-Dubins problem ,Scheme (mathematics) ,penalty function methods ,ℓ1-penalty function ,Penalty method ,primal-dual methods ,Mathematics
We propose a new duality scheme based on a sequence of smooth minorants of the weighted-ℓ1 penalty function, interpreted as a parametrized sequence of augmented Lagrangians, to solve non-convex constrained optimization problems. For the induced sequence of dual problems, we establish strong asymptotic duality properties. Namely, we show that (i) the sequence of dual problems is convex and (ii) the dual values monotonically increase to the optimal primal value. We use these properties to devise a subgradient based primal–dual method, and show that the generated primal sequence accumulates at a solution of the original problem. We illustrate the performance of the new method with three different types of test problems: A polynomial non-convex problem, large-scale instances of the celebrated kissing number problem, and the Markov–Dubins problem. Our numerical experiments demonstrate that, when compared with the traditional implementation of a well-known smooth solver, our new method (using the same well-known solver in its subproblem) can find better quality solutions, i.e. 'deeper' local minima, or solutions closer to the global minimum. Moreover, our method seems to be more time efficient, especially when the problem has a large number of constraints. Refereed/Peer-reviewed
- Published
- 2022
35. Decentralized Algorithms for Wasserstein Barycenters
- Author
-
Dvinskikh, Darina, Spokoiny, Vladimir, Schmitzer, Bernhard, and Niles-Weed, Jonathan
- Subjects
ddc:519 ,optimaler Transport ,Wasserstein Baryzentrum ,519 Wahrscheinlichkeiten und angewandte Mathematik ,Orakel erster Ordnung ,510 Mathematik ,stochastic optimization ,SK 800 ,Wasserstein barycenter ,optimal transport ,dezentrale Optimierung ,stochastische Optimierung ,primal-duale Optimierungsmethoden erster Ordnung ,ddc:510 ,first-order oracle ,decentralized optimization ,distributed optimization ,primal-dual methods - Abstract
In this thesis, we consider the Wasserstein barycenter problem of discrete probability measures as well as the population Wasserstein barycenter problem given by a Fréchet mean, from both the computational and statistical sides. The statistical focus is estimating the sample size of measures needed to calculate an approximation of a Fréchet mean (barycenter) of probability distributions with a given precision. For empirical risk minimization approaches, the question of regularization is also studied, along with the proposal of a new regularization which contributes to better complexity bounds in comparison with the quadratic regularization. The computational focus is developing decentralized algorithms for calculating Wasserstein barycenters: dual algorithms and saddle-point algorithms. The motivation for dual approaches is the existence of closed forms for the dual formulation of entropy-regularized Wasserstein distances and their derivatives, whereas the primal formulation has a closed-form expression only in some cases, e.g., for Gaussian measures. Moreover, the dual oracle, which returns the gradient of the dual representation of the entropy-regularized Wasserstein distance, can be computed more cheaply than the primal oracle returning the gradient of the (entropy-regularized) Wasserstein distance. The number of dual oracle calls is also smaller, namely the square root of the number of primal oracle calls. Furthermore, in contrast to the primal objective, the dual objective has a Lipschitz continuous gradient due to the strong convexity of regularized Wasserstein distances. Finally, we study the saddle-point formulation of the non-regularized Wasserstein barycenter problem, which leads to a bilinear saddle-point problem. 
This approach also allows us to obtain optimal complexity bounds and can easily be presented in a decentralized setup.
- Published
- 2021
- Full Text
- View/download PDF
36. A modified convex variational model for multiplicative noise removal.
- Author
-
Liu, Min and Fan, Qibin
- Subjects
- *
NOISE control , *ITERATIVE methods (Mathematics) , *DIVERGENCE theorem , *ALGORITHMS , *CONVEX domains - Abstract
In this paper, a convex variational model for multiplicative noise removal is studied. An accelerated primal–dual method and a proximal linearized alternating direction method are also discussed, and an improved primal–dual method is proposed. These algorithms produce more desirable results than the plain primal–dual algorithm when solving the convex variational model. Inspired by the statistical properties of Gamma multiplicative noise and the I-divergence, a modified convex variational model is proposed, for which uniqueness of the solution is also provided. Moreover, properties of the solution are presented. Without inner iterations, the primal–dual method is efficient for the modified model, and running time can be reduced dramatically while still achieving good restoration. When the parameter α is set to 0, the proposed convex variational model reduces to the model in Steidl and Teuber (2010). By altering α, our model can be used for different noise levels. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
37. Decentralized Algorithms for Wasserstein Barycenters
- Author
-
Spokoiny, Vladimir, Schmitzer, Bernhard, Niles-Weed, Jonathan, Dvinskikh, Darina, Spokoiny, Vladimir, Schmitzer, Bernhard, Niles-Weed, Jonathan, and Dvinskikh, Darina
- Abstract
In this thesis, we consider the Wasserstein barycenter problem of discrete probability measures as well as the population Wasserstein barycenter problem given by a Fréchet mean, from both the computational and statistical sides. The statistical focus is estimating the sample size of measures needed to calculate an approximation of a Fréchet mean (barycenter) of probability distributions with a given precision. For empirical risk minimization approaches, the question of regularization is also studied, along with the proposal of a new regularization which contributes to better complexity bounds in comparison with the quadratic regularization. The computational focus is developing decentralized algorithms for calculating Wasserstein barycenters: dual algorithms and saddle-point algorithms. The motivation for dual approaches is the existence of closed forms for the dual formulation of entropy-regularized Wasserstein distances and their derivatives, whereas the primal formulation has a closed-form expression only in some cases, e.g., for Gaussian measures. Moreover, the dual oracle, which returns the gradient of the dual representation of the entropy-regularized Wasserstein distance, can be computed more cheaply than the primal oracle returning the gradient of the (entropy-regularized) Wasserstein distance. The number of dual oracle calls is also smaller, namely the square root of the number of primal oracle calls. Furthermore, in contrast to the primal objective, the dual objective has a Lipschitz continuous gradient due to the strong convexity of regularized Wasserstein distances. Finally, we study the saddle-point formulation of the non-regularized Wasserstein barycenter problem, which leads to a bilinear saddle-point problem. 
This approach also allows us to obtain optimal complexity bounds and can easily be presented in a decentralized setup.
- Published
- 2021
38. Online Learning of Feasible Strategies in Unknown Environments.
- Author
-
Paternain, Santiago and Ribeiro, Alejandro
- Subjects
- *
DISTANCE education , *COST functions , *LINEAR programming , *CONVEX functions , *SADDLEPOINT approximations , *OPTIMAL control theory - Abstract
Define an environment as a set of convex constraint functions that vary arbitrarily over time and consider a cost function that is also convex and arbitrarily varying. Agents that operate in this environment intend to select actions that are feasible for all times while minimizing the cost's time average. Such action is said optimal and can be computed offline if the cost and the environment are known a priori. An online policy is one that depends causally on the cost and the environment. To compare online policies to the optimal offline action define the fit of a trajectory as a vector that integrates the constraint violations over time and its regret as the cost difference with the optimal action accumulated over time. Fit measures the extent to which an online policy succeeds in learning feasible actions while regret measures its success in learning optimal actions. This paper proposes the use of online policies computed from a saddle point controller which are shown to have fit and regret that are either bounded or grow at a sublinear rate. These properties provide an indication that the controller finds trajectories that are feasible and optimal in a relaxed sense. Concepts are illustrated throughout with the problem of a shepherd that wants to stay close to all sheep in a herd. Numerical experiments show that the saddle point controller allows the shepherd to do so. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
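The saddle-point controller described above can be sketched on a hypothetical one-dimensional instance with a fixed cost f(x) = (x - 2)^2 and constraint x <= 1: a primal descent step and a dual ascent step on the online Lagrangian, with the accumulated constraint violation playing the role of fit. Names, constants, and the instance are illustrative assumptions.

```python
def saddle_point_controller(a=2.0, eta=0.05, T=4000):
    """Online primal-dual (saddle point) updates for the cost f(x) = (x - a)^2
    under the constraint g(x) = x - 1 <= 0.  The dual variable accumulates
    constraint violation, steering the action back toward feasibility."""
    x, lam, fit = 0.0, 0.0, 0.0
    for _ in range(T):
        grad_x = 2 * (x - a) + lam              # d/dx of the Lagrangian
        x -= eta * grad_x                       # primal descent
        lam = max(0.0, lam + eta * (x - 1.0))   # dual ascent on the constraint
        fit += max(0.0, x - 1.0)                # running constraint violation
    return x, lam, fit

x, lam, fit = saddle_point_controller()
```

The trajectory settles at the constrained optimum x = 1 with multiplier lam = 2 (from stationarity 2(1 - 2) + lam = 0), while the violations accumulated in `fit` stay bounded.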
39. Quasi-monotone Subgradient Methods for Nonsmooth Convex Minimization.
- Author
-
Nesterov, Yu. and Shikhman, V.
- Subjects
- *
MATHEMATICAL optimization , *NONSMOOTH optimization , *SUBGRADIENT methods , *CONVEX functions , *ITERATIVE methods (Mathematics) - Abstract
In this paper, we develop new subgradient methods for solving nonsmooth convex optimization problems. These methods guarantee the best possible rate of convergence for the whole sequence of test points. Our methods are applicable as efficient real-time stabilization tools for potential systems with infinite horizon. Preliminary numerical experiments confirm a high efficiency of the new schemes. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
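A classical subgradient scheme with a diminishing step and iterate averaging gives the flavor of the methods above; it is not the quasi-monotone variant of the paper, and the objective and data are invented for illustration.

```python
import numpy as np

def subgradient_method(c, T=20000):
    """Subgradient descent with 1/sqrt(t) steps for the nonsmooth convex
    function f(x) = ||x - c||_1, whose minimizer is x = c.  The running
    average of iterates damps the oscillation typical of subgradient steps."""
    x = np.zeros_like(c)
    x_avg = np.zeros_like(c)
    for t in range(1, T + 1):
        g = np.sign(x - c)               # a subgradient of f at x
        x = x - g / np.sqrt(t)           # diminishing step size
        x_avg += (x - x_avg) / t         # running average of iterates
    return x_avg

c = np.array([1.0, -2.0, 0.5])
x = subgradient_method(c)
```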
40. Stability and Performance Limits of Adaptive Primal-Dual Networks.
- Author
-
Towfic, Zaid J. and Sayed, Ali H.
- Subjects
- *
STREAMING technology , *LAGRANGIAN functions , *DIFFUSION of innovations , *STOCHASTIC analysis , *MATHEMATICAL optimization - Abstract
This paper studies distributed primal-dual strategies for adaptation and learning over networks from streaming data. Two first-order methods are considered based on the Arrow-Hurwicz (AH) and augmented Lagrangian (AL) techniques. Several revealing results are discovered in relation to the performance and stability of these strategies when employed over adaptive networks. The conclusions establish that the advantages that these methods exhibit for deterministic optimization problems do not necessarily carry over to stochastic optimization problems. It is found that they have narrower stability ranges and worse steady-state mean-square-error performance than primal methods of the consensus and diffusion type. It is also found that the AH technique can become unstable under a partial observation model, while the other techniques are able to recover the unknown under this scenario. A method to enhance the performance of AL strategies is proposed by tying the selection of the step-size to their regularization parameter. It is shown that this method allows the AL algorithm to approach the performance of consensus and diffusion strategies but that it remains less stable than these other strategies. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
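For a deterministic toy equality-constrained quadratic, the Arrow-Hurwicz recursion studied above reduces to two coupled first-order updates. The problem data and step size below are illustrative assumptions; the paper's stochastic-gradient setting and its stability caveats are not modeled.

```python
import numpy as np

def arrow_hurwicz(A, b, c, mu=0.05, T=5000):
    """Arrow-Hurwicz iteration on the Lagrangian
    L(x, y) = 0.5*||x - c||^2 + y'(Ax - b):
    a gradient descent step in x followed by a gradient ascent step in y."""
    x = np.zeros(A.shape[1])
    y = np.zeros(A.shape[0])
    for _ in range(T):
        x = x - mu * (x - c + A.T @ y)   # primal descent
        y = y + mu * (A @ x - b)         # dual ascent
    return x, y

A = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, -1.0]])
b = np.array([1.0, 0.0])
c = np.array([0.5, -1.0, 2.0])
x, y = arrow_hurwicz(A, b, c)
```

For this instance the KKT system A A' y = A c - b gives x* = (1/3, 1/3, 1/3), which the iteration reaches for a sufficiently small step size mu.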
41. Linear computational complexity design of constrained optimal ILC.
- Author
-
Haber, Aleksandar, Fraanje, Rufus, and Verhaegen, Michel
- Abstract
In this paper we present a linear computational complexity framework for design and implementation of (constrained) lifted Iterative Learning Control (ILC) systems with quadratic cost. The problem of designing constrained lifted ILC with quadratic cost is formulated as a convex optimization problem. We solve this problem using the primal-dual interior point method. The high computational complexity of the primal-dual method, which renders it computationally infeasible for high-dimensional lifted ILC systems, is significantly decreased by exploiting the sequentially semi-separable (SSS) structure of lifted system matrices. More precisely, the O(N^3) computational cost of one iteration of the primal-dual method is reduced to O(N), where N characterizes the size of the lifted system matrices. Furthermore, by exploiting the SSS structure the large lifted system matrices can be efficiently stored in computer memory. We also show that the SSS structure can be exploited to efficiently implement the analytical solution of the unconstrained lifted ILC problem with quadratic cost and to calculate the norm and stability radius of the ILC system. [ABSTRACT FROM PUBLISHER]
- Published
- 2011
- Full Text
- View/download PDF
42. Logarithmic barrier decomposition-based interior point methods for stochastic symmetric programming.
- Author
-
Alzalg, Baha and Ariyawansa, K.A.
- Subjects
- *
LOGARITHMIC functions , *MATHEMATICAL decomposition , *INTERIOR-point methods , *MATHEMATICAL symmetry , *STOCHASTIC programming , *LINEAR systems - Abstract
We introduce and study two-stage stochastic symmetric programs with recourse to handle uncertainty in data defining (deterministic) symmetric programs in which a linear function is minimized over the intersection of an affine set and a symmetric cone. We present a Benders' decomposition-based interior point algorithm for solving these problems and prove its polynomial complexity. Our convergence analysis proceeds by showing that the log barrier associated with the recourse function of stochastic symmetric programs behaves as a strongly self-concordant barrier and forms a self-concordant family on the first-stage solutions. Since our analysis applies to all symmetric cones, this algorithm extends Zhao's results [G. Zhao, A log barrier method with Benders' decomposition for solving two-stage stochastic linear programs, Math. Program. Ser. A 90 (2001) 507–536] for two-stage stochastic linear programs, and Mehrotra and Özevin's results [S. Mehrotra, M.G. Özevin, Decomposition-based interior point methods for two-stage stochastic semidefinite programming, SIAM J. Optim. 18 (1) (2007) 206–222] for two-stage stochastic semidefinite programs. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
43. RATE OF CONVERGENCE ANALYSIS OF DECOMPOSITION METHODS BASED ON THE PROXIMAL METHOD OF MULTIPLIERS FOR CONVEX MINIMIZATION.
- Author
-
SHEFI, RON and TEBOULLE, MARC
- Subjects
- *
DECOMPOSITION method , *MULTIPLIERS (Mathematical analysis) , *CONVEX programming , *ALGORITHMS , *MATHEMATICAL programming - Abstract
This paper presents two classes of decomposition algorithms based on the proximal method of multipliers (PMM) introduced in the mid-1970s by Rockafellar for convex minimization. We first show that the PMM framework is at the root of many past and recent decomposition schemes suggested in the literature, allowing for an elementary analysis of these methods through a unified scheme. We then prove various sublinear global convergence rate results for the two classes of PMM-based decomposition algorithms in terms of function values and constraint violation. Furthermore, under a mild assumption on the problem's data, we derive rate-of-convergence results in terms of the original primal function values for both classes. As a by-product of our analysis, we also obtain convergence of the sequences produced by the two algorithm classes to optimal primal-dual solutions. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
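One member of the PMM family the abstract above analyzes can be sketched concretely for the equality-constrained problem min ½‖x − z‖² s.t. Ax = b, where the proximally regularized subproblem in x reduces to a linear solve. This is an illustrative special case under assumed names (`pmm`, parameter `c`), not the authors' general decomposition scheme:

```python
import numpy as np

def pmm(A, b, z, c=1.0, iters=500):
    """Proximal method of multipliers for min 0.5*||x - z||^2 s.t. Ax = b.
    Each iteration adds a proximal term on x to the augmented Lagrangian:
      x_{k+1} = argmin_x 0.5||x-z||^2 + y_k^T (Ax - b)
                        + (c/2)||Ax - b||^2 + (1/(2c))||x - x_k||^2
      y_{k+1} = y_k + c * (A x_{k+1} - b)
    For this quadratic objective the x-update is a single linear system."""
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    M = (1.0 + 1.0 / c) * np.eye(n) + c * A.T @ A   # fixed system matrix
    for _ in range(iters):
        rhs = z + x / c - A.T @ y + c * A.T @ b
        x = np.linalg.solve(M, rhs)
        y = y + c * (A @ x - b)
    return x, y
```

For this strongly convex instance the iterates converge linearly to the projection of z onto the affine set {x : Ax = b}, which is available in closed form and makes the sketch easy to check.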
44. A Multi-Agent Primal-Dual Strategy for Composite Optimization over Distributed Features
- Author
-
Ali H. Sayed, Sulaiman A. Alghunaim, and Ming Yan
- Subjects
FOS: Computer and information sciences ,Mathematical optimization ,decentralized composite optimization ,Computer Science - Machine Learning ,Optimization problem ,Computer science ,linear convergence ,Regular polygon ,020206 networking & telecommunications ,02 engineering and technology ,Function (mathematics) ,Machine Learning (cs.LG) ,Rate of convergence ,Optimization and Control (math.OC) ,0202 electrical engineering, electronic engineering, information engineering ,FOS: Mathematics ,Resource allocation ,020201 artificial intelligence & image processing ,distributed learning ,primal-dual methods ,Mathematics - Optimization and Control ,Saddle - Abstract
This work studies multi-agent sharing optimization problems with the objective function being the sum of smooth local functions plus a convex (possibly non-smooth) function coupling all agents. This scenario arises in many machine learning and engineering applications, such as regression over distributed features and resource allocation. We reformulate this problem into an equivalent saddle-point problem, which is amenable to decentralized solutions. We then propose a proximal primal-dual algorithm and establish its linear convergence to the optimal solution when the local functions are strongly convex. To our knowledge, this is the first linearly convergent decentralized algorithm for multi-agent sharing problems with a general convex (possibly non-smooth) coupling function. (Comment: To appear in European Signal Processing Conference (EUSIPCO) 2020.)
- Published
- 2020
- Full Text
- View/download PDF
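The saddle-point reformulation used in the abstract above has a well-known centralized analogue: a primal-dual iteration on min_x f(x) + g(Ax) written as min_x max_y f(x) + yᵀAx − g*(y). The sketch below is that centralized flavor with simple quadratic f and g so both proximal maps are closed-form; it is not the authors' decentralized algorithm, and the names `primal_dual`, `tau`, `sigma` are assumptions:

```python
import numpy as np

def primal_dual(A, b, c, tau, sigma, iters=5000):
    """Primal-dual saddle-point iteration for
        min_x 0.5||x - c||^2 + 0.5||Ax - b||^2,
    i.e. min_x max_y f(x) + y^T A x - g*(y) with
        f(x)  = 0.5||x - c||^2,
        g(z)  = 0.5||z - b||^2,  so  g*(y) = 0.5||y||^2 + b^T y.
    Requires tau * sigma * ||A||^2 <= 1 for convergence."""
    m, n = A.shape
    x = np.zeros(n); y = np.zeros(m); x_bar = x.copy()
    for _ in range(iters):
        # dual step: y <- prox_{sigma g*}(y + sigma * A x_bar)
        y = (y + sigma * (A @ x_bar) - sigma * b) / (1.0 + sigma)
        # primal step: x <- prox_{tau f}(x - tau * A^T y)
        x_new = (x - tau * (A.T @ y) + tau * c) / (1.0 + tau)
        x_bar = 2.0 * x_new - x     # extrapolation
        x = x_new
    return x
```

With both f and g* strongly convex, as here, even the fixed-step iteration converges linearly, so the result can be checked against the direct normal-equations solve of the same problem.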
45. A primal–dual interior point method for a novel type-2 second order cone optimization
- Author
-
Sarowar Morshed, Md. Noor-E-Alam, and Chrysafis Vogiatzis
- Subjects
T57-57.97 ,Mathematical optimization ,Applied mathematics. Quantitative methods ,Computer science ,Order (ring theory) ,Type (model theory) ,Primal dual ,Second order cone optimization ,Kernel functions ,Cone (topology) ,Optimization and Control (math.OC) ,Primal–dual methods ,FOS: Mathematics ,General Earth and Planetary Sciences ,Interior point methods ,Focus (optics) ,Mathematics - Optimization and Control ,Interior point method ,Conic optimization ,General Environmental Science ,Variable (mathematics) - Abstract
In this paper, we define a new, special second order cone as a type-$k$ second order cone. We focus on the case of $k=2$, which can be viewed as SOCO with an additional {\em complicating variable}. For this new problem, we develop the necessary prerequisites, based on previous work for traditional SOCO. We then develop a primal-dual interior point algorithm for solving a type-2 second order conic optimization (SOCO) problem, based on a family of kernel functions suitable for this type-2 SOCO. We finally derive the following iteration bound for our framework: \[\frac{L^\gamma}{\theta \kappa \gamma} \left[2N \psi\left( \frac{\varrho \left(\tau /4N\right)}{\sqrt{1-\theta}}\right)\right]^\gamma\log \frac{3N}{\epsilon}.\]
- Published
- 2021
46. Homogeneous Penalizers and Constraints in Convex Image Restoration.
- Author
-
Ciak, R., Shafei, B., and Steidl, G.
- Abstract
Recently, convex optimization models were successfully applied for solving various problems in image analysis and restoration. In this paper, we are interested in relations between convex constrained optimization problems of the form $\operatorname{argmin}\{ \varPhi(x) \mbox{ subject to } \varPsi(x) \le\tau\}$ and their penalized counterparts $\operatorname{argmin}\{\varPhi(x) + \lambda\varPsi(x)\}$. We recall general results on the topic with the help of an epigraphical projection. Then we deal with the special setting Ψ := ∥L·∥ with L∈ℝ and Φ := φ(H·), where H∈ℝ and φ: ℝ → ℝ ∪ {+∞} meet certain requirements which are often fulfilled in image processing models. In this case we prove, by incorporating the dual problems, that there exists a bijective function such that the solutions of the constrained problem coincide with those of the penalized problem if and only if τ and λ are in the graph of this function. We illustrate the relation between τ and λ for various problems arising in image processing. In particular, we point out the relation to the Pareto frontier for joint sparsity problems. We demonstrate the performance of the constrained model in restoration tasks of images corrupted by Poisson noise with the I-divergence as data-fitting term φ, and in inpainting models with the constrained nuclear norm. Such models can be useful if we have a priori knowledge on the image rather than on the noise level. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
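The τ–λ correspondence described in the abstract above can be checked numerically on a simple ridge-type instance with Φ(x) = ½‖Hx − f‖² and Ψ(x) = ‖x‖² (the squared norm is chosen here so the penalized minimizer has a closed form; this instance, and the names `Phi`, `penalized`, are assumptions of the sketch, not the paper's setting):

```python
import numpy as np

rng = np.random.default_rng(2)
H = rng.normal(size=(8, 5)); f = rng.normal(size=8)

def Phi(x):
    """Data term Phi(x) = 0.5 * ||Hx - f||^2."""
    return 0.5 * np.sum((H @ x - f) ** 2)

def penalized(lam):
    """Closed-form minimizer of Phi(x) + lam * Psi(x) with Psi(x) = ||x||^2."""
    return np.linalg.solve(H.T @ H + 2.0 * lam * np.eye(5), H.T @ f)

# tau(lam) = Psi(x(lam)) traces the graph pairing each lam with its tau:
# x(lam) also solves the constrained problem with tau = Psi(x(lam)).
lams = [0.01, 0.1, 1.0, 10.0]
taus = [penalized(l) @ penalized(l) for l in lams]
```

Optimality of x(λ) for the matching constrained problem follows directly: any z with Ψ(z) ≤ τ = Ψ(x(λ)) satisfies Φ(z) ≥ Φ(x(λ)) + λ(Ψ(x(λ)) − Ψ(z)) ≥ Φ(x(λ)), which random feasible samples confirm.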
47. A Simple New Algorithm for Quadratic Programming with Applications in Statistics.
- Author
-
Meyer, MaryC.
- Subjects
- *
QUADRATIC programming , *ALGORITHMS , *APPLICATION software , *ESTIMATION theory , *LINEAR statistical models , *POLYHEDRAL functions , *PROGRAMMING languages , *LEAST squares - Abstract
Problems involving estimation and inference under linear inequality constraints arise often in statistical modeling. In this article, we propose an algorithm to solve the quadratic programming problem of minimizing a quadratic objective for positive definite Q, where the solution is constrained to lie in a closed polyhedral convex cone, and the m×n constraint matrix is not necessarily full row rank. The three-step algorithm is intuitive and easy to code. Code is provided in the R programming language. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
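Meyer's three-step algorithm itself is provided in R in the article; as a language-neutral sketch of the same problem class, the snippet below solves min ½(θ − y)ᵀQ(θ − y) over the simplest polyhedral convex cone, the nonnegative orthant, by projected gradient descent. This is a stand-in method under assumed names (`qp_over_orthant`), not the article's algorithm:

```python
import numpy as np

def qp_over_orthant(Q, y, iters=5000):
    """Projected gradient for min 0.5*(th - y)^T Q (th - y) s.t. th >= 0.
    The nonnegative orthant is a polyhedral cone whose Euclidean
    projection is simply a componentwise clip at zero."""
    L = np.linalg.eigvalsh(Q)[-1]          # Lipschitz constant of the gradient
    th = np.clip(y, 0.0, None)
    for _ in range(iters):
        grad = Q @ (th - y)
        th = np.clip(th - grad / L, 0.0, None)
    return th
```

Correctness of the returned point can be verified through the KKT conditions for this cone: θ ≥ 0, the gradient Q(θ − y) ≥ 0, and complementary slackness θᵢ · [Q(θ − y)]ᵢ = 0.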
48. An incremental primal-dual method for nonlinear programming with special structure.
- Author
-
Couellan, Nicolas and Trafalis, Theodore
- Abstract
We propose a new class of incremental primal-dual techniques for solving nonlinear programming problems with special structure. Specifically, the objective functions of the problems are sums of independent nonconvex continuously differentiable terms minimized subject to a set of nonlinear constraints for each term. The technique performs successive primal-dual increments for each decomposition term of the objective function. The primal-dual increments are calculated by performing one Newton step towards the solution of the Karush-Kuhn-Tucker optimality conditions of each subproblem associated with each objective function term. We show that the resulting incremental algorithm is q-linearly convergent under mild assumptions for the original problem. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
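The building block of the incremental scheme described above, one Newton step toward the KKT conditions of a single subproblem, can be sketched as follows for min f(x) s.t. h(x) = 0. This is an illustrative helper under assumed names (`newton_kkt_step`), not the authors' full incremental algorithm; on an equality-constrained quadratic, a single step from any starting point lands exactly on the primal-dual solution:

```python
import numpy as np

def newton_kkt_step(x, y, grad_f, hess_f, h, jac_h):
    """One Newton step on the KKT system of min f(x) s.t. h(x) = 0:
         grad_f(x) + jac_h(x)^T y = 0,    h(x) = 0.
    Linearizes the KKT residual and solves for the joint
    primal-dual increment (dx, dy)."""
    J = jac_h(x)
    m = J.shape[0]
    K = np.block([[hess_f(x), J.T],
                  [J, np.zeros((m, m))]])
    rhs = -np.concatenate([grad_f(x) + J.T @ y, h(x)])
    d = np.linalg.solve(K, rhs)
    return x + d[:len(x)], y + d[len(x):]
```

In the incremental setting, such a step would be applied once per objective-function term before moving to the next term, rather than iterated to convergence on any single subproblem.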
49. PDE acceleration: a convergence rate analysis and applications to obstacle problems
- Author
-
Calder, Jeff and Yezzi, Anthony
- Published
- 2019
- Full Text
- View/download PDF
50. Local path-following property of inexact interior methods in nonlinear programming.
- Author
-
Armand, Paul, Benoist, Joël, and Dussault, Jean-Pierre
- Subjects
INTERIOR-point methods ,NONLINEAR systems ,MATHEMATICAL optimization ,NEWTON-Raphson method ,COMPLEMENTARITY (Physics) - Abstract
We study the local behavior of a primal-dual inexact interior point method for solving nonlinear systems arising from the solution of nonlinear optimization problems or, more generally, from nonlinear complementarity problems. The algorithm is based on the Newton method applied to a sequence of perturbed systems obtained by perturbing the complementarity equations of the original system. In the case of an exact solution of the Newton system, it has been shown that the sequence of iterates is asymptotically tangent to the central path (Armand and Benoist in Math. Program. 115:199-222, ). The purpose of the present paper is to extend this result to an inexact solution of the Newton system. We give quite general conditions on the different parameters of the algorithm under which this asymptotic property is satisfied. Some numerical tests are reported to illustrate our theoretical results. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
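The perturbed systems studied above can be illustrated on a one-variable toy problem: min (x + 1)² s.t. x ≥ 0, whose KKT conditions with multiplier z are 2(x + 1) − z = 0 together with the perturbed complementarity equation xz = μ. Newton steps on this system for a decreasing sequence μ → 0 follow the central path toward the solution x* = 0, z* = 2. This is a hedged sketch with assumed names (`follow_path`), not the paper's general algorithm:

```python
import numpy as np

def follow_path(mus, newton_iters=10):
    """Newton path-following on the perturbed KKT system
         F(x, z; mu) = [2(x + 1) - z,  x*z - mu] = 0
    for min (x + 1)^2 s.t. x >= 0. A fraction-to-boundary
    rule keeps the iterates strictly positive."""
    x, z = 1.0, 1.0
    for mu in mus:
        for _ in range(newton_iters):
            F = np.array([2.0 * (x + 1.0) - z, x * z - mu])
            J = np.array([[2.0, -1.0], [z, x]])
            dx, dz = np.linalg.solve(J, -F)
            alpha = 1.0
            if dx < 0: alpha = min(alpha, -0.99 * x / dx)
            if dz < 0: alpha = min(alpha, -0.99 * z / dz)
            x, z = x + alpha * dx, z + alpha * dz
    return x, z
```

The inexact variant analyzed in the paper replaces the exact linear solve with an approximate one; the path-following structure (warm-starting each Newton run from the previous μ) is the same.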