1,056 results for "Subgradient methods"
Search Results
2. Halpern inertial subgradient extragradient algorithm for solving equilibrium problems in Banach spaces.
- Author
- Abass, H. A.
- Subjects
- BANACH spaces, SUBGRADIENT methods, PROBLEM solving, EXTRAPOLATION, EQUILIBRIUM, NONEXPANSIVE mappings
- Abstract
In this work, we study an equilibrium problem involving a pseudomonotone bifunction in the context of uniformly smooth and 2-uniformly convex real Banach spaces. For the purpose of solving the fixed point problem of a finite family of multi-valued relatively nonexpansive mappings and the pseudomonotone equilibrium problem, we introduce an inertial subgradient extragradient method and establish its strong convergence. To increase the step size and enhance the performance of our iterative method, we use a new parameter that is independent of the inertial extrapolation method. We point out that our step size is chosen in a self-adaptive manner, making our iterative algorithm simple to implement without knowing the Lipschitz constants beforehand. Finally, we give some numerical results for the proposed algorithm and a comparison with some other known algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
3. Accuracy Certificates for Convex Minimization with Inexact Oracle.
- Author
- Gladin, Egor, Gasnikov, Alexander, and Dvurechensky, Pavel
- Subjects
- LAGRANGE problem, SUBGRADIENT methods, ALGORITHMS
- Abstract
Accuracy certificates for convex minimization problems allow for online verification of the accuracy of approximate solutions and provide a theoretically valid online stopping criterion. When solving the Lagrange dual problem, accuracy certificates produce a simple way to recover an approximate primal solution and estimate its accuracy. In this paper, we generalize accuracy certificates for the setting of an inexact first-order oracle, including the setting of primal and Lagrange dual pair of problems. We further propose an explicit way to construct accuracy certificates for a large class of cutting plane methods based on polytopes. As a by-product, we show that the considered cutting plane methods can be efficiently used with a noisy oracle even though they were originally designed to be equipped with an exact oracle. Finally, we illustrate the work of the proposed certificates in the numerical experiments highlighting that our certificates provide a tight upper bound on the objective residual. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
4. An Online Two-Stage Classification Based on Projections.
- Author
- Song, Aimin, Wang, Yan, and Luan, Shengyang
- Subjects
- CLASSIFICATION algorithms, ONLINE algorithms, PARALLEL algorithms, SUBGRADIENT methods, HILBERT space
- Abstract
Kernel-based online classification algorithms, such as the Perceptron, NORMA, and passive-aggressive, are renowned for their computational efficiency but have been criticized for slow convergence. However, the parallel projection algorithm, within the adaptive projected subgradient method framework, exhibits accelerated convergence and enhanced noise resilience. Despite these advantages, a specific sparsification procedure for the parallel projection algorithm is currently absent. Additionally, existing online classification algorithms, including those mentioned earlier, heavily rely on the kernel width parameter, rendering them sensitive to its choice. In an effort to bolster the performance of these algorithms, we propose a two-stage classification algorithm within the Cartesian product space of reproducing kernel Hilbert spaces. In the initial stage, we introduce an online double-kernel classifier with parallel projection. This design aims not only to improve convergence but also to address the sensitivity to kernel width. In the subsequent stage, the component with a larger kernel width remains fixed, while the component with a smaller kernel width undergoes updates. To promote sparsity and mitigate model complexity, we incorporate the projection-along-subspace technique. Moreover, for enhanced computational efficiency, we integrate the set-membership technique into the updates, selectively exploiting informative vectors to improve the classifier. The monotone approximation of the proposed classifier, based on the designed ϵ-insensitive function, is presented. Finally, we apply the proposed algorithm to equalize a nonlinear channel. Simulation results demonstrate that the proposed classifier achieves faster convergence and lower misclassification error with comparable model complexity. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
5. Computing subgradients of convex relaxations for solutions of parametric ordinary differential equations.
- Author
- Song, Yingkai and Khan, Kamil A.
- Subjects
- GLOBAL optimization, ORDINARY differential equations, DYNAMICAL systems, SUBGRADIENT methods, EVALUATION methodology
- Abstract
A novel subgradient evaluation method is proposed for nonsmooth convex relaxations of parametric solutions of ordinary differential equations (ODEs) arising in global dynamic optimization, assuming that the relaxations always lie strictly within interval bounds during integration. We argue that this assumption is reasonable in practice. These subgradients are computed as the unique solution of an auxiliary parametric affine ODE, analogous to classical forward/tangent sensitivity evaluation methods for smooth dynamic systems. Unlike established subgradient evaluation approaches for nonsmooth dynamic systems, this new method does not require smoothness or transversality assumptions, and is compatible with existing subgradient evaluation methods for closed-form convex functions, as implemented in subgradient evaluation software such as EAGO.jl and MC++. Moreover, we show that a subgradient for a lower-bounding problem in global dynamic optimization can be directly evaluated using reverse/adjoint sensitivity analysis, which may reduce the overall computational effort for an overarching global optimization method. Numerical examples are presented, based on a proof-of-concept implementation in Julia. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. Perturbation Method and Regularization of the Lagrange Principle in Nonlinear Constrained Optimization Problems.
- Author
- Sumin, M. I.
- Subjects
- MATHEMATICAL forms, CONSTRAINED optimization, COMPUTATIONAL mathematics, LAGRANGE multiplier, SUBGRADIENT methods
- Abstract
The Lagrange principle in nondifferential form is regularized for a nonlinear (nonconvex) constrained optimization problem with an operator equality constraint in a Hilbert space. The feasible set of the problem belongs to a complete metric space, and the existence of its solution is not assumed a priori. The equality constraint involves an additive parameter, so a "nonlinear version" of the perturbation method can be used to study the problem. The regularized Lagrange principle is used mainly for the stable generation of generalized minimizing sequences (GMSes) in the considered nonlinear problem. It can be treated as a GMS-generating (regularizing) operator that takes each set of initial data of the problem to a subminimal (minimal) of its regular augmented Lagrangian corresponding to this set with the dual variable generated by the Tikhonov stabilization procedure for the dual problem. The augmented Lagrangian is completely determined by the form of "nonlinear" subdifferentials of the lower semicontinuous and generally nonconvex value function, which is regarded as a function of parameters of the problem. As such subdifferentials, we use the proximal subgradient and the Fréchet subdifferential, which are well known in nonsmooth (nonlinear) analysis. The regularized Lagrange principle overcomes the ill-posedness of its classical counterpart and can be treated as a regularizing algorithm, thus providing the theoretical basis for the development of stable methods for practically solving nonlinear constrained optimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. Self-Adaptive Extragradient Algorithms for Quasi-Equilibrium Problems.
- Author
- Van Thang, Tran and Le, Xuan Thanh
- Subjects
- SUBGRADIENT methods, HILBERT space, QUASI-equilibrium, PROBLEM solving, ALGORITHMS
- Abstract
We propose two iterative algorithms for solving two classes of quasi-equilibrium problems in Hilbert spaces: pseudomonotone and quasimonotone ones. The algorithms combine the subgradient method and the projection method with self-adaptive step sizes. Convergence of our proposed algorithms requires a condition that is milder than the one commonly used in the existing papers. Numerical experiments show that our algorithms are efficient and competitive with other extragradient-type, projection-type, and proximal point algorithms in solving the problem. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. Primal Subgradient Methods with Predefined Step Sizes.
- Author
- Nesterov, Yurii
- Subjects
- SUBGRADIENT methods, COMPUTATIONAL mathematics, MATHEMATICAL optimization, APPLIED mathematics, PROBLEM solving, LAGRANGE multiplier, NONSMOOTH optimization
- Abstract
In this paper, we suggest a new framework for analyzing primal subgradient methods for nonsmooth convex optimization problems. We show that the classical step-size rules, based on normalization of the subgradient or on knowledge of the optimal value of the objective function, need corrections when they are applied to optimization problems with constraints. Their proper modifications allow a significant acceleration of these schemes when the objective function has favorable properties (smoothness, strong convexity). We show how the new methods can be used for solving optimization problems with functional constraints with a possibility to approximate the optimal Lagrange multipliers. One of our primal-dual methods also works for an unbounded feasible set. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
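For reference, the classical normalized-subgradient baseline that such step-size rules modify can be written in a few lines. The sketch below is illustrative only; the ℓ1 toy objective, box projection, and radius parameter are assumed examples, not taken from the paper.

```python
import numpy as np

def projected_subgradient(oracle, project, x0, radius, n_iters):
    """Classical normalized subgradient steps h_k = radius / sqrt(k + 1),
    followed by a Euclidean projection onto the feasible set.
    oracle(x) -> (f(x), g) with g a subgradient of f at x."""
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), np.inf
    for k in range(n_iters):
        fx, g = oracle(x)
        if fx < best_f:
            best_x, best_f = x.copy(), fx
        h = radius / np.sqrt(k + 1.0)
        x = project(x - h * g / (np.linalg.norm(g) + 1e-12))
    return best_x, best_f

# Toy run (assumed example): minimize ||x||_1 over the box [-1, 2]^5
oracle = lambda x: (np.abs(x).sum(), np.sign(x))
project = lambda y: np.clip(y, -1.0, 2.0)
x_best, f_best = projected_subgradient(oracle, project, np.full(5, 2.0),
                                        radius=5.0, n_iters=500)
```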
9. Polyak Minorant Method for Convex Optimization.
- Author
- Devanathan, Nikhil and Boyd, Stephen
- Subjects
- SUBGRADIENT methods, CONVEX functions, GENERALIZATION, ALGORITHMS
- Abstract
In 1963 Boris Polyak suggested a particular step size for gradient descent methods, now known as the Polyak step size, that he later adapted to subgradient methods. The Polyak step size requires knowledge of the optimal value of the minimization problem, which is a strong assumption but one that holds for several important problems. In this paper we extend Polyak's method to handle constraints and, as a generalization of subgradients, general minorants, which are convex functions that tightly lower bound the objective and constraint functions. We refer to this algorithm as the Polyak Minorant Method (PMM). It is closely related to cutting-plane and bundle methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
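For context, the classical Polyak step size that the paper generalizes chooses h_k = (f(x_k) − f*) / ||g_k||². A minimal sketch, assuming an oracle that returns a function value and one subgradient; the max-of-absolute-values test function is an assumed example, not from the paper.

```python
import numpy as np

def polyak_subgradient(oracle, x0, f_star, n_iters):
    """Subgradient method with the Polyak step size
    h_k = (f(x_k) - f_star) / ||g_k||^2, which presumes the optimal value
    f_star is known (the 'strong assumption' the abstract mentions)."""
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), np.inf
    for _ in range(n_iters):
        fx, g = oracle(x)
        if fx < best_f:
            best_x, best_f = x.copy(), fx
        gnorm2 = g @ g
        if gnorm2 == 0.0 or fx <= f_star:
            break                       # x is already optimal
        x = x - ((fx - f_star) / gnorm2) * g
    return best_x, best_f

# Assumed test function: f(x) = max_i |x_i|, with known optimal value f* = 0.
def oracle(x):
    i = int(np.argmax(np.abs(x)))
    g = np.zeros_like(x)
    g[i] = np.sign(x[i])
    return float(np.abs(x).max()), g

x_best, f_best = polyak_subgradient(oracle, np.array([3.0, -1.5, 2.0]),
                                    f_star=0.0, n_iters=200)
```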
10. Three novel inertial subgradient extragradient methods for quasi-monotone variational inequalities in Banach spaces.
- Author
- Zhong-bao Wang, Pongsakorn Sunthrayuth, Ratthaprom Promkam, and Abubakar Adamu
- Subjects
- COST functions, SUBGRADIENT methods, BANACH spaces, CONVEX sets, ALGORITHMS
- Abstract
In this paper, we introduce three new inertial subgradient extragradient methods for solving variational inequalities involving quasi-monotone operators in the setting of 2-uniformly convex and uniformly smooth Banach spaces. We dispense with the well-known requirement of the stepsizes of the subgradient extragradient method on the prior knowledge of the Lipschitz constant of the cost function in our proposed algorithms. Furthermore, we give many numerical examples to test the robustness of our proposed algorithms and compare their performance with several algorithms in the literature. In addition, we use our proposed algorithms in the restoration process of some degraded images and compare the quality of the restored images using our proposed algorithms and some recent algorithms in the literature. Finally, from the results of the numerical simulations, our proposed algorithms are competitive and promising. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
11. A two-step inertial algorithm for solving equilibrium problems on Hadamard manifolds.
- Author
- ABASS, H. A., ADAMU, A., and APHANE, M.
- Subjects
- SUBGRADIENT methods, RIEMANNIAN manifolds, PROBLEM solving, EQUILIBRIUM, ALGORITHMS
- Abstract
In the context of Hadamard manifolds, we develop a two-step inertial subgradient extragradient method for approximating solutions of equilibrium problems involving a pseudomonotone operator. To avoid dependency of the step-size on the Lipschitz constant of the underlying operator, we propose an iterative method with a self-adaptive step size that can increase with each iteration. Furthermore, we prove a strong convergence result concerning the sequence generated by our two-step inertial subgradient extragradient method in the setting of Hadamard manifolds. To illustrate the performance of our iterative method relative to other methods of a similar nature, we provide some numerical examples. The results presented in this article are new in this space and extend many related findings in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
12. Joint Optimization Algorithm for Latency-Energy and Privacy Combining Discrete Differential Evolution and Convex Optimization.
- Author
- Deng, Yun, Li, Jinyong, and Chen, Shouxue
- Subjects
- OPTIMIZATION algorithms, SUBGRADIENT methods, POWER transmission, COMPUTATIONAL complexity, ENERGY consumption, DIFFERENTIAL evolution
- Abstract
A privacy protection task offloading algorithm combining discrete differential evolution and convex optimization is proposed to resolve the position privacy leakage problem that arises for mobile terminals when tasks are transferred to multiple edge servers for computation via wireless networks. First, the privacy cost of an offloading decision is defined based on the characteristics of the position privacy problem combined with standard deviation theory. Second, the objective problem is modeled as a joint optimization of position privacy protection, offloading decision, and transmission power, with the total cost of task offloading as the optimization objective. Finally, to handle the objective problem efficiently, the offloading decision variable is decoupled from the transmission power variable to decrease the computational complexity of the problem; the discrete differential evolution algorithm is designed to obtain an approximately optimal offloading decision, and the optimal transmission power is efficiently solved by the variable substitution method and the subgradient method. Simulations reveal that the algorithm has excellent convergence; it improves the position privacy protection level of the mobile terminal and reduces the total cost of task offloading while maintaining lower energy consumption and latency. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Two subgradient extragradient methods based on the golden ratio technique for solving variational inequality problems.
- Author
- Oyewole, Olawale K. and Reich, Simeon
- Subjects
- GOLDEN ratio, SUBGRADIENT methods, HILBERT space, ALGORITHMS, VARIATIONAL inequalities (Mathematics)
- Abstract
We propose and study two new methods based on the golden ratio technique for approximating solutions to variational inequality problems in Hilbert space. The first method combines the golden ratio technique with the subgradient extragradient method. In the second method, we incorporate the alternating golden ratio technique into the subgradient extragradient method. Both methods use self-adaptive step sizes which are allowed to increase during the execution of the algorithms, thus limiting the dependence of our methods on the starting point of the scaling parameter. We prove that under appropriate conditions, the resulting methods converge either weakly or R-linearly to a solution of the variational inequality problem associated with a pseudomonotone operator. In order to show the numerical advantage of our methods, we first present the results of several pertinent numerical experiments and then compare the performance of our proposed methods with that of some existing methods which can be found in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
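Several entries in this list build on the subgradient extragradient template for variational inequalities VI(F, C): an extrapolation step onto C, a cheap projection onto a constructed half-space, and a self-adaptive step size that avoids the Lipschitz constant of F. Below is a bare-bones sketch of that template only, without the golden-ratio or inertial terms these papers add; the affine toy operator and box set are assumed examples.

```python
import numpy as np

def subgradient_extragradient(F, project_C, x0, tau0=1.0, mu=0.5, n_iters=200):
    """One standard form of the subgradient extragradient iteration for
    VI(F, C), with a non-increasing self-adaptive step size (the papers
    above also allow the step to grow)."""
    x = np.asarray(x0, dtype=float)
    tau = tau0
    for _ in range(n_iters):
        Fx = F(x)
        y = project_C(x - tau * Fx)
        Fy = F(y)
        # Half-space T = {w : <x - tau*Fx - y, w - y> <= 0}; project z onto T.
        a = x - tau * Fx - y
        z = x - tau * Fy
        aa = a @ a
        x_next = z - (max(a @ (z - y), 0.0) / aa) * a if aa > 0 else z
        # Self-adaptive step size: no Lipschitz constant needed.
        dF = np.linalg.norm(Fx - Fy)
        if dF > 0:
            tau = min(tau, mu * np.linalg.norm(x - y) / dF)
        x = x_next
    return x

# Toy run (assumed example): affine monotone F over the unit box
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([-1.0, 1.0])
sol = subgradient_extragradient(lambda v: A @ v + b,
                                lambda v: np.clip(v, 0.0, 1.0),
                                x0=np.ones(2))
```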
14. A constraint dissolving approach for nonsmooth optimization over the Stiefel manifold.
- Author
- Hu, Xiaoyin, Xiao, Nachuan, Liu, Xin, and Toh, Kim-Chuan
- Subjects
- SUBGRADIENT methods, NEIGHBORHOODS, NONSMOOTH optimization
- Abstract
This paper focuses on the minimization of a possibly nonsmooth objective function over the Stiefel manifold. The existing approaches either lack efficiency or can only tackle prox-friendly objective functions. We propose a constraint dissolving function named NCDF and show that it has the same first-order stationary points and local minimizers as the original problem in a neighborhood of the Stiefel manifold. Furthermore, we show that the Clarke subdifferential of NCDF is easy to achieve from the Clarke subdifferential of the objective function. Therefore, various existing approaches for unconstrained nonsmooth optimization can be directly applied to nonsmooth optimization problems over the Stiefel manifold. We propose a framework for developing subgradient-based methods and establishing their convergence properties based on prior works. Furthermore, based on our proposed framework, we can develop efficient approaches for optimization over the Stiefel manifold. Preliminary numerical experiments further highlight that the proposed constraint dissolving approach yields efficient and direct implementations of various unconstrained approaches to nonsmooth optimization problems over the Stiefel manifold. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Distributed Dual Subgradient Methods with Averaging and Applications to Grid Optimization.
- Author
- Liu, Haitian, Bose, Subhonmesh, Nguyen, Hoa Dinh, Guo, Ye, Doan, Thinh T., and Beck, Carolyn L.
- Subjects
- SUBGRADIENT methods, ELECTRICAL load, POWER resources, MATHEMATICAL optimization, ALGORITHMS
- Abstract
We study finite-time performance of a recently proposed distributed dual subgradient (DDSG) method for convex-constrained multi-agent optimization problems. The algorithm enjoys performance guarantees on the last primal iterate, as opposed to those derived for ergodic means for standard DDSG algorithms. Our work improves the recently published convergence rate of O(log T/T) with decaying step-sizes to O(1/T) with constant step-size on a metric that combines sub-optimality and constraint violation. We then numerically evaluate the algorithm on three grid optimization problems. Namely, these are tie-line scheduling in multi-area power systems, coordination of distributed energy resources in radial distribution networks, and joint dispatch of transmission and distribution assets. The DDSG algorithm applies to each problem with various relaxations and linearizations of the power flow equations. The numerical experiments illustrate various properties of the DDSG algorithm: comparison with standard DDSG, the impact of the number of agents, and why Nesterov-style acceleration can fail in DDSG settings. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
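The dual subgradient mechanism with primal averaging that the abstract refers to can be illustrated on a toy separable problem. The quadratic objective, closed-form best responses, and constant step below are assumptions made for illustration; they are far simpler than the paper's grid applications and last-iterate analysis.

```python
import numpy as np

def dual_subgradient_averaging(d, cap, alpha=0.1, n_iters=500):
    """Toy dual decomposition: min sum_i (x_i - d_i)^2 / 2  s.t.  sum_i x_i <= cap.
    Agent i's local best response to the price lam is x_i = d_i - lam (closed
    form here). The dual variable is updated by projected subgradient ascent
    with a constant step, and a running primal average is kept, mirroring the
    'averaging' in the title."""
    lam, x_avg = 0.0, np.zeros_like(d)
    for t in range(1, n_iters + 1):
        x = d - lam                                    # local Lagrangian minimizers
        lam = max(0.0, lam + alpha * (x.sum() - cap))  # dual subgradient step
        x_avg += (x - x_avg) / t                       # running primal average
    return x_avg, lam

x_avg, price = dual_subgradient_averaging(d=np.array([3.0, 2.0, 1.0]), cap=3.0)
# converges toward the optimum lam = 1, x = (2, 1, 0) for this toy instance
```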
16. A subgradient projection method for quasiconvex minimization.
- Author
- Choque, Juan, Lara, Felipe, and Marcavillaca, Raúl T.
- Subjects
- SUBGRADIENT methods, SUBDIFFERENTIALS, MATHEMATICS, ALGORITHMS
- Abstract
In this paper, a subgradient projection method for quasiconvex minimization problems is provided. By employing strong subdifferentials, it is proved that the generated sequence of the proposed algorithm converges to the solution of the minimization problem of a proper, lower semicontinuous, and strongly quasiconvex function (in the sense of Polyak in Soviet Math 7:72–75, 1966), under the same assumptions as those required for convex functions with the convex subdifferentials. Furthermore, a quasi-linear convergence rate of the iterates, extending similar results for the general quasiconvex case, is also provided. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Solving discrete network design problem using disjunctive constraints.
- Author
- Mirzahossein, H., Najafi, P., Kalantari, N., and Waller, T.
- Subjects
- TRAVEL time (Traffic engineering), TRANSPORTATION planning, DETERMINISTIC algorithms, BILEVEL programming, SUBGRADIENT methods
- Abstract
This paper introduces a deterministic algorithm to solve the discrete network design problem (DNDP) efficiently. This non-convex bilevel optimization problem is well known to be NP-hard in strategic transportation planning. The proposed algorithm optimizes budget allocation for large-scale network improvements deterministically and with computational efficiency. It integrates disjunctive programming with an improved partial linearized subgradient method to enhance performance without significantly affecting solution quality. We evaluated our algorithm on the mid-scale Sioux Falls and large-scale Chicago networks. We assess the proposed algorithm's accuracy by examining the value of the objective function, specifically the total travel time within the network. When tested on the mid-scale Sioux Falls network, the algorithm achieved an average 46% improvement in computational efficiency compared to the best-performing method discussed in this paper, albeit with a 4.17% higher total travel time (the value of the objective function) than the most accurate one. In the application to the large-scale Chicago network, the efficiency improved by an average of 99.48% while the total travel time experienced a 4.34% increase. These findings indicate that the deterministic algorithm proposed in this research improves computational speed while presenting a limited trade-off with solution precision. This deterministic approach offers a structured, predictable, and repeatable method for solving DNDP, which can advance transportation planning, particularly for large-scale network applications where computational efficiency is paramount. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. AN-SPS: adaptive sample size nonmonotone line search spectral projected subgradient method for convex constrained optimization problems.
- Author
- Krklec Jerinkić, Nataša and Ostojić, Tijana
- Subjects
- SUBGRADIENT methods, MATHEMATICAL forms, CONSTRAINED optimization, SAMPLE size (Statistics), MATHEMATICAL functions, NONSMOOTH optimization
- Abstract
We consider convex optimization problems with a possibly nonsmooth objective function in the form of a mathematical expectation. The proposed framework (AN-SPS) employs Sample Average Approximations (SAA) to approximate the objective function, which is either unavailable or too costly to compute. The sample size is chosen in an adaptive manner, which eventually pushes the SAA error to zero almost surely (a.s.). The search direction is based on a scaled subgradient and a spectral coefficient, both related to the SAA function. The step size is obtained via a nonmonotone line search over a predefined interval, which yields a theoretically sound and practically efficient algorithm. The method retains feasibility by projecting the resulting points onto a feasible set. The a.s. convergence of the AN-SPS method is proved without the assumption of a bounded feasible set or bounded iterates. Preliminary numerical results on Hinge loss problems reveal the advantages of the proposed adaptive scheme. In addition, a study of different nonmonotone line search strategies in combination with different spectral coefficients within the AN-SPS framework is also conducted, yielding some hints for future work. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Global stability of first-order methods for coercive tame functions.
- Author
- Josz, Cédric and Lai, Lexiao
- Subjects
- SUBGRADIENT methods, DIFFERENTIAL inclusions, POINT set theory, GEOMETRY, NEIGHBORHOODS
- Abstract
We consider first-order methods with constant step size for minimizing locally Lipschitz coercive functions that are tame in an o-minimal structure on the real field. We prove that if the method is approximated by subgradient trajectories, then the iterates eventually remain in a neighborhood of a connected component of the set of critical points. Under suitable method-dependent regularity assumptions, this result applies to the subgradient method with momentum, the stochastic subgradient method with random reshuffling and momentum, and the random-permutations cyclic coordinate descent method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Mann-Type Inertial Accelerated Subgradient Extragradient Algorithm for Minimum-Norm Solution of Split Equilibrium Problems Induced by Fixed Point Problems in Hilbert Spaces.
- Author
- Khonchaliew, Manatchanok, Khamdam, Kunlanan, and Petrot, Narin
- Subjects
- HILBERT space, SUBGRADIENT methods, LINEAR operators, PRIOR learning, EQUILIBRIUM, MONOTONE operators, NONEXPANSIVE mappings
- Abstract
This paper presents the Mann-type inertial accelerated subgradient extragradient algorithm with non-monotonic step sizes for solving the split equilibrium and fixed point problems relating to pseudomonotone and Lipschitz-type continuous bifunctions and nonexpansive mappings in the framework of real Hilbert spaces. By sufficient conditions on the control sequences of the parameters of concern, the strong convergence theorem to support the proposed algorithm, which involves neither prior knowledge of the Lipschitz constants of bifunctions nor the operator norm of the bounded linear operator, is demonstrated. Some numerical experiments are performed to show the efficacy of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. Intertwined supply network design under facility and transportation disruption from the viability perspective.
- Author
- Mozhu Wang and Jianming Yao
- Subjects
- SUBGRADIENT methods, FLEXIBLE structures, MEDICAL equipment, NUMERICAL analysis, PROBLEM solving, DEMAND function
- Abstract
To ensure viability, it is necessary for an intertwined supply network (ISN) system to optimise its network structure to provide flexible redundancy in response to a changing environment. Considering that resilience methods are usually designed as reactions to single discrete disruptions rather than situational reactions to real-time changes, this study proposes a novel redundancy optimisation approach that dynamically provides each demand market with a pair of supply routes to optimise the flexible redundancy of the ISN, thereby ensuring the survivability of supply chains and demand markets under continuous changes. Based on this, we propose the ISN design (ISND) model to capture the trade-off between total cost and viability performance under facility and transportation disruption. The Lagrangian relaxation algorithm, combined with the sub-gradient method and the improved cellular genetic algorithm, is utilised for solving problems of different scales. To test the performance of the model and corresponding algorithms, we also conduct a numerical analysis of data from a medical equipment ISN in southern China. The results indicate that the ISND model can effectively optimise ISN structures, which makes it possible to dynamically provide flexible redundancy; the two algorithms also show good calculation efficiency. The relationship between ISN structure and viability performance is thus observed and explained. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
22. A Superlinearly Convergent Subgradient Method for Sharp Semismooth Problems.
- Author
- Charisopoulos, Vasileios and Davis, Damek
- Subjects
- SUBGRADIENT methods, OPTIMIZATION algorithms, NONSMOOTH optimization, CONVEX functions
- Abstract
Subgradient methods comprise a fundamental class of nonsmooth optimization algorithms. Classical results show that certain subgradient methods converge sublinearly for general Lipschitz convex functions and converge linearly for convex functions that grow sharply away from solutions. Recent work has moreover extended these results to certain nonconvex problems. In this work, we seek to improve the complexity of these algorithms by asking the following question. Is it possible to design a superlinearly convergent subgradient method? We provide a positive answer to this question for a broad class of sharp semismooth functions. Funding: The research of D. Davis was supported by the Division of Mathematical Sciences [Grant 2047637] and the Alfred P. Sloan Foundation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. A new double inertial subgradient extragradient method for solving a non-monotone variational inequality problem in Hilbert space.
- Author
- Ziqi Zhu, Kaiye Zheng, and Shenghua Wang
- Subjects
- SUBGRADIENT methods, HILBERT space, VARIATIONAL inequalities (Mathematics)
- Abstract
In this paper, we introduce a new double inertial subgradient extragradient method for solving a variational inequality problem in Hilbert space. In our method, the mapping need not satisfy any monotonicity assumption, and two different self-adaptive step sizes are used to avoid the need for the Lipschitz constant of the mapping. The strong convergence of the proposed method is proved under some new conditions. Finally, some numerical examples are presented to illustrate the convergence of our method and to compare it with some related methods in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
24. A new spectral conjugate subgradient method with application in computed tomography image reconstruction.
- Author
- Loreto, M., Humphries, T., Raghavan, C., Wu, K., and Kwak, S.
- Subjects
- IMAGE reconstruction, SUBGRADIENT methods, COMPUTED tomography, NONSMOOTH optimization, CONJUGATE gradient methods
- Abstract
A new spectral conjugate subgradient method is presented to solve nonsmooth unconstrained optimization problems. The method combines the spectral conjugate gradient method for smooth problems with the spectral subgradient method for nonsmooth problems. We study the effect of two different choices of line search, as well as three formulas for determining the conjugate directions. In addition to numerical experiments with standard nonsmooth test problems, we also apply the method to several image reconstruction problems in computed tomography, using total variation regularization. Performance profiles are used to compare the performance of the algorithm using different line search strategies and conjugate directions to that of the original spectral subgradient method. Our results show that the spectral conjugate subgradient algorithm outperforms the original spectral subgradient method, and that the use of the Polak–Ribière formula for conjugate directions provides the best and most robust performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
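The two ingredients the abstract combines, a spectral (Barzilai–Borwein-type) steplength built from iterate and subgradient differences and a Polak–Ribière conjugate direction, can be sketched as follows. The paper's line searches are replaced here by a plain scaled step and the safeguards are generic choices, so treat this as an illustrative sketch rather than the authors' algorithm.

```python
import numpy as np

def spectral_conjugate_subgradient(oracle, x0, n_iters=200,
                                   lam_min=1e-4, lam_max=10.0):
    """Spectral steplength lam = s's / s'y (safeguarded) combined with a
    Polak-Ribiere+ conjugate direction, applied to subgradients.
    oracle(x) -> (f(x), g) with g a subgradient of f at x."""
    x = np.asarray(x0, dtype=float)
    fx, g = oracle(x)
    d = -g.copy()
    lam = 1.0
    best_x, best_f = x.copy(), fx
    for _ in range(n_iters):
        x_new = x + lam * d
        f_new, g_new = oracle(x_new)
        if f_new < best_f:
            best_x, best_f = x_new.copy(), f_new
        s, y = x_new - x, g_new - g
        sty = s @ y
        # spectral (Barzilai-Borwein) coefficient, safeguarded as is customary
        lam = float(np.clip((s @ s) / sty, lam_min, lam_max)) if sty > 0 else lam_max
        # Polak-Ribiere+ coefficient for the conjugate direction
        beta = max(0.0, g_new @ (g_new - g) / max(g @ g, 1e-12))
        d = -g_new + beta * d
        x, g = x_new, g_new
    return best_x, best_f

# Toy run (assumed example): nonsmooth f(x) = ||x||_1
oracle = lambda x: (np.abs(x).sum(), np.sign(x))
x_best, f_best = spectral_conjugate_subgradient(oracle, np.array([2.0, -3.0, 1.0]))
```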
25. TWO NOVEL ALGORITHMS FOR SOLVING VARIATIONAL INEQUALITY PROBLEMS GOVERNED BY FIXED POINT PROBLEMS AND THEIR APPLICATIONS.
- Author
- SHANSHAN XU, BING TAN, and SONGXIAO LI
- Subjects
- SUBGRADIENT methods, HILBERT space, ALGORITHMS, VARIATIONAL inequalities (Mathematics)
- Abstract
We study the problem of finding a common solution to the variational inequality problem with a pseudomonotone and Lipschitz continuous operator and the fixed point problem with a demicontractive mapping in real Hilbert spaces. Inspired by the inertial method and the subgradient extragradient method, two improved viscosity-type efficient iterative methods with a new adaptive non-monotonic step size criterion are proposed. We prove that the strong convergence theorems of these new methods hold under some standard and mild conditions. Numerical examples in finite- and infinite-dimensional spaces are provided to illustrate the effectiveness and potential applicability of the suggested iterative methods compared to some known ones. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. Mini-batch stochastic subgradient for functional constrained optimization.
- Author
- Singh, Nitesh Kumar, Necoara, Ion, and Kungurtsev, Vyacheslav
- Subjects
- SUBGRADIENT methods, CONVEX sets, DIFFERENTIABLE functions, MACHINE learning, CONSTRAINED optimization
- Abstract
In this paper, we consider finite sum composite optimization problems with many functional constraints. The objective function is expressed as a finite sum of two terms, one of which admits easy computation of (sub)gradients while the other is amenable to proximal evaluations. We assume a generalized bounded gradient condition on the objective which allows us to simultaneously tackle both smooth and nonsmooth problems. We also consider the cases of both with and without a quadratic functional growth property. Further, we assume that each constraint set is given as the level set of a convex but not necessarily a differentiable function. We reformulate the constrained finite sum problem into a stochastic optimization problem for which the stochastic subgradient projection method from Necoara and Singh [Stochastic subgradient projection methods for composite optimization with functional constraints; 2022 Journal of Machine Learning Research, 23, 1–35] specializes in a collection of mini-batch variants, with different mini-batch sizes for the objective function and functional constraints, respectively. More specifically, at each iteration, our algorithm takes a mini-batch stochastic proximal subgradient step aimed at minimizing the objective function and then a subsequent mini-batch subgradient projection step minimizing the feasibility violation. By specializing in different mini-batching strategies, we derive exact expressions for the stepsizes as a function of the mini-batch size and in some cases we also derive insightful stepsize-switching rules which describe when one should switch from a constant to a decreasing stepsize regime. We also prove sublinear convergence rates for the mini-batch subgradient projection algorithm which depend explicitly on the mini-batch sizes and on the properties of the objective function. Numerical results also show a better performance of our mini-batch scheme over its single-batch counterpart. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
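The alternating structure described in the abstract, a mini-batch subgradient step on the objective followed by a mini-batch feasibility step on sampled constraints, can be sketched on a toy problem. The random data and the alpha0/sqrt(k) schedule below are generic stand-ins, not the stepsize expressions derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem (assumed): min (1/n) sum_i |a_i.x - b_i|  s.t.  c_j.x - d_j <= 0
n, m, dim = 50, 10, 5
A, b = rng.normal(size=(n, dim)), rng.normal(size=n)
C, d = rng.normal(size=(m, dim)), np.abs(rng.normal(size=m)) + 0.5

def run(n_iters=2000, batch_obj=5, batch_con=3, alpha0=0.5):
    """One objective step and one feasibility step per iteration, each on a
    random mini-batch, in the spirit of the scheme described above."""
    x = np.zeros(dim)
    for k in range(1, n_iters + 1):
        # 1) mini-batch subgradient of the sampled objective terms
        i = rng.choice(n, batch_obj, replace=False)
        g = (np.sign(A[i] @ x - b[i]) @ A[i]) / batch_obj
        x = x - (alpha0 / np.sqrt(k)) * g
        # 2) mini-batch feasibility step: averaged Polyak-type projections
        #    x <- x - (max(0, g_j(x)) / ||grad g_j||^2) * grad g_j
        j = rng.choice(m, batch_con, replace=False)
        viol = np.maximum(C[j] @ x - d[j], 0.0)
        norms2 = np.einsum('ij,ij->i', C[j], C[j])
        x = x - ((viol / norms2) @ C[j]) / batch_con
    return x

x = run()
print("max constraint violation:", float(np.max(C @ x - d)))
```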
27. A FUNCTIONAL MODEL METHOD FOR NONCONVEX NONSMOOTH CONDITIONAL STOCHASTIC OPTIMIZATION.
- Author
- RUSZCZYŃSKI, ANDRZEJ and SHANGZHE YANG
- Subjects
- MACHINE learning, SMOOTHNESS of functions, SUBGRADIENT methods, CONDITIONAL expectations, DIFFERENTIABLE functions
- Abstract
We consider stochastic optimization problems involving an expected value of a nonlinear function of a base random vector and a conditional expectation of another function depending on the base random vector, a dependent random vector, and the decision variables. We call such problems conditional stochastic optimization problems. They arise in many applications, such as uplift modeling, reinforcement learning, and contextual optimization. We propose a specialized single time-scale stochastic method for nonconvex constrained conditional stochastic optimization problems with a Lipschitz smooth outer function and a generalized differentiable inner function. In the method, we approximate the inner conditional expectation with a rich parametric model whose mean squared error satisfies a stochastic version of a Łojasiewicz condition. The model is used by an inner learning algorithm. The main feature of our approach is that unbiased stochastic estimates of the directions used by the method can be generated with one observation from the joint distribution per iteration, which makes it applicable to real-time learning. The directions, however, are not gradients or subgradients of any overall objective function. We prove the convergence of the method with probability one, using the method of differential inclusions and a specially designed Lyapunov function, involving a stochastic generalization of the Bregman distance. Finally, a numerical illustration demonstrates the viability of our approach. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Long term dynamics of the subgradient method for Lipschitz path differentiable functions.
- Author
- Bolte, Jérôme, Pauwels, Edouard, and Ríos-Zertuche, Rodolfo
- Subjects
- SUBGRADIENT methods, MATHEMATICAL functions, LIPSCHITZ spaces, OSCILLATIONS, STOCHASTIC convergence
- Abstract
We consider the long-term dynamics of the vanishing stepsize subgradient method in the case when the objective function is neither smooth nor convex. We assume that this function is locally Lipschitz and path differentiable, i.e., admits a chain rule. Our study departs from other works in the sense that we focus on the behavior of the oscillations, and to do this we use closed measures, a concept that complements the technique of asymptotic pseudotrajectories developed in this setting by Benaïm–Hofbauer–Sorin. We recover known convergence results, establish new ones, and show a local principle of oscillation compensation for the velocities. Roughly speaking, the time average of gradients around one limit point vanishes. Various cases are discussed, providing new insight into the oscillation and the stabilization phenomena. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. Convergence of distributed approximate subgradient method for minimizing convex function with convex functional constraints.
- Author
- Jedsadapong Pioon, Narin Petrot, and Nimit Nimana
- Subjects
- SUBGRADIENT methods, CONVEX functions
- Abstract
In this paper, we investigate the distributed approximate subgradient-type method for minimizing a sum of differentiable and non-differentiable convex functions subject to nondifferentiable convex functional constraints in a Euclidean space. We establish the convergence of the sequence generated by our method to an optimal solution of the problem under consideration. Moreover, we derive a convergence rate of order O(N^{1−a}) for the objective function values, where a ∈ (0.5, 1). Finally, we provide a numerical example illustrating the effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
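A generic consensus-plus-subgradient sketch in the spirit of this abstract is given below, with a diminishing step 1/k^a for a ∈ (0.5, 1) matching the rate condition quoted above. The complete-graph weights and toy median problem are assumptions made for illustration; the paper's approximate subgradients and functional constraints are omitted.

```python
import numpy as np

def distributed_subgradient(subgrads, W, x0, a=0.75, n_iters=2000):
    """Each agent mixes its neighbors' iterates through a doubly stochastic
    weight matrix W, then takes a local subgradient step with step 1/k^a."""
    X = np.tile(np.asarray(x0, dtype=float), (len(subgrads), 1))  # row per agent
    for k in range(1, n_iters + 1):
        X = W @ X                                     # consensus / mixing step
        step = 1.0 / k**a
        X -= step * np.vstack([g(x) for g, x in zip(subgrads, X)])
    return X.mean(axis=0)

# Toy run: 3 agents minimizing sum_i |x - c_i| (minimizer: the median of c_i)
cs = [np.array([1.0]), np.array([2.0]), np.array([10.0])]
subgrads = [lambda x, c=c: np.sign(x - c) for c in cs]
W = np.full((3, 3), 1.0 / 3.0)                        # complete-graph averaging
print(distributed_subgradient(subgrads, W, np.zeros(1)))
```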
30. Modified mildly inertial subgradient extragradient method for solving pseudomonotone equilibrium problems and nonexpansive fixed point problems.
- Author
- Akutsah, Francis, Mebawondu, Akindele Adebayo, Ofem, Austine Efut, George, Reny, Nabwey, Hossam A., and Narain, Ojen Kumar
- Subjects
- SUBGRADIENT methods, HILBERT space, EQUILIBRIUM, PROBLEM solving, NONEXPANSIVE mappings
- Abstract
This paper presents and examines a newly improved linear technique for solving the equilibrium problem of a pseudomonotone operator and the fixed point problem of a nonexpansive mapping within a real Hilbert space framework. The technique relies on two modified mildly inertial methods and the subgradient extragradient approach. In addition, it can be viewed as an advancement over the previously known inertial subgradient extragradient approach. Under common assumptions, the weak convergence of the algorithm is established. Finally, in order to confirm the efficiency and benefit of the proposed algorithm, we present a few numerical experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. Distributed primal-dual method on unbalanced digraphs with row stochasticity.
- Author
- Sakuma, Hiroaki, Hayashi, Naoki, and Takai, Shigemasa
- Subjects
- COST functions, STOCHASTIC matrices, SUBGRADIENT methods, DIRECTED graphs, INFORMATION networks, EIGENVECTORS, DISTRIBUTED algorithms
- Abstract
This study investigates distributed convex optimisation that accounts for the inequality constraints over unbalanced directed graphs. In distributed optimisation, agents exchange information on a network to obtain an optimal solution when they only know their own cost function. We propose a distributed primal-dual subgradient method based on a row stochastic weight matrix that is associated with a communication network. In the proposed method, the normalised left eigenvector of the weight matrix is estimated by a consensus algorithm. Then, the subgradient of the Lagrange function is scaled by the estimated values of the left eigenvector to compensate for the imbalance of the information flow in the network. We show that the pair of estimations for the primal and dual problems converge to an optimal primal-dual solution. We also show the relation between the convergence rate of the proposed algorithm and the step-size rule. A numerical example confirms the validity of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. An Inertial Type Subgradient Extragradient Method for Common Fixed Point and Variational Inequalities Problems in Real Banach Space.
- Author
- Ali, B. and Ajio, J. T.
- Subjects
- BANACH spaces, SUBGRADIENT methods, PRIOR learning, ALGORITHMS, NONEXPANSIVE mappings, SELF
- Abstract
In this paper, we introduce a new inertial-type algorithm with a self-adaptive step-size technique for approximating a common element of the set of solutions of a pseudomonotone variational inequality problem and the set of common fixed points of a finite family of generic generalized nonspreading mappings in uniformly smooth and 2-uniformly convex real Banach spaces. Furthermore, we prove a strong convergence theorem for our algorithm without prior knowledge of the Lipschitz constant of the operator under some mild assumptions. We also give numerical examples to illustrate the performance of our algorithm. Our results generalize and improve many existing results in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Newtonian Property of Subgradient Method with Optimization of Metric Matrix Parameter Correction.
- Author
- Tovbis, Elena, Krutikov, Vladimir, and Kazakovtsev, Lev
- Subjects
- SUBGRADIENT methods, NEWTON-Raphson method, DERIVATIVES (Mathematics), QUASI-Newton methods, SMOOTHNESS of functions, MATRIX inversion, NONSMOOTH optimization
- Abstract
The work proves that under conditions of instability of the second derivatives of the function in the minimization region, the estimate of the convergence rate of Newton's method is determined by the parameters of the irreducible part of the conditionality degree of the problem. These parameters represent the degree of difference between eigenvalues of the matrices of the second derivatives in the coordinate system, where this difference is minimal, and the resulting estimate of the convergence rate subsequently acts as a standard. The paper studies the convergence rate of the relaxation subgradient method (RSM) with optimization of the parameters of two-rank correction of metric matrices on smooth strongly convex functions with a Lipschitz gradient without assumptions about the existence of second derivatives of the function. The considered RSM is similar in structure to quasi-Newton minimization methods. Unlike the latter, its metric matrix is not an approximation of the inverse matrix of second derivatives but is adjusted in such a way that it enables one to find the descent direction that takes the method beyond a certain neighborhood of the current minimum as a result of one-dimensional minimization along it. This means that the metric matrix enables one to turn the current gradient into a direction that is gradient-consistent with the set of gradients of some neighborhood of the current minimum. Under broad assumptions on the parameters of transformations of metric matrices, an estimate of the convergence rate of the studied RSM and an estimate of its ability to exclude removable linear background are obtained. The obtained estimates turn out to be qualitatively similar to estimates for Newton's method. In this case, the assumption of the existence of second derivatives of the function is not required. A computational experiment was carried out in which the quasi-Newton BFGS method and the subgradient method under study were compared on various types of smooth functions. The testing results indicate the effectiveness of the subgradient method in minimizing smooth functions with a high degree of conditionality of the problem and its ability to eliminate the linear background that worsens the convergence. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
34. Inertial subgradient extragradient method for solving pseudomonotone variational inequality problems in Banach spaces.
- Author
- Peng, Zai-Yun, Peng, Zhi-Ying, Cai, Gang, and Li, Gao-Xi
- Subjects
- BANACH spaces, SUBGRADIENT methods, VARIATIONAL inequalities (Mathematics)
- Abstract
In this paper, an inertial subgradient extragradient algorithm is proposed to solve pseudomonotone variational inequality problems in Banach spaces. This iterative scheme employs a new line-search rule. Strong convergence theorems for the proposed algorithms are established under the assumption that the operators are non-Lipschitz continuous. Furthermore, several numerical experiments are given to show that our method has better convergence performance than the known ones in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. Mean–standard-deviation-based electric vehicle routing problem with time windows using Lagrangian relaxation and extended alternating direction method of multipliers-based decomposition algorithm.
- Author
- Xia, Jieman, He, Zhou, Wang, Shuolei, Liu, Siliang, and Zhang, Shuai
- Subjects
- VEHICLE routing problem, DECOMPOSITION method, TRAVEL time (Traffic engineering), ELECTRIC vehicles, SUBGRADIENT methods, UNCERTAIN systems, SPACE-time block codes
- Abstract
In response to the challenges of global warming to sustainable development, electric vehicles (EVs) have become an important part of the supply chain. Numerous studies have focused on the electric vehicle routing problem with time windows (EVRPTW) to design an optimal routing plan for EVs and meet customers' demands within the specified time windows. However, most studies have been conducted in deterministic situations and neglected the uncertain travel time in reality. Therefore, this study proposes a novel mean–standard-deviation-based EVRPTW model and describes uncertain travel time with the expected travel time, travel time variance, and reliable parameters. Besides, the proposed model considers the partial recharge policy and capacitated recharge stations in an uncertain environment, and proposes a time-discrete state-space-time-energy representation network to better simulate real-world situations. To effectively solve the proposed model, a new Lagrangian relaxation and extended alternating direction method of multipliers (LR–EADMM)-based decomposition algorithm is proposed. In the LR–EADMM, dynamic penalty parameters and the subgradient method are employed to update the Lagrangian multipliers, strengthen the algorithmic convergence, and improve the quality of solutions. Furthermore, to validate the performance of the proposed algorithm, several comparison experiments are conducted with the Gurobi optimizer and two baseline algorithms. With reliable parameter 0, the experimental results show that the average upper bounds obtained by LR–EADMM are 0.78% greater than the best-known solutions. With reliable parameters 1 and 2, the average gap between lower and upper bounds obtained by LR–EADMM is 8.08% smaller than that obtained by the baseline algorithm in solving small instances, and the average upper bounds obtained by LR–EADMM are, respectively, 3.01% and 5.71% better than those obtained by the two baseline algorithms in solving large instances. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
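The basic Lagrangian-relaxation loop with a subgradient multiplier update, which methods like the one above extend with ADMM splitting and dynamic penalties, looks as follows on a toy 0-1 problem. The instance and the duality-gap step size are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def lagrangian_relaxation(c, A, b, n_iters=100, theta=1.0):
    """Relax Ax <= b with multipliers lam >= 0, solve the separable 0-1
    subproblem in closed form, and update lam by the classical subgradient
    step theta * (UB - q) / ||g||^2. Assumes b >= 0, so x = 0 is feasible
    and 0 is a valid upper bound."""
    lam = np.zeros(A.shape[0])
    ub, best_lb = 0.0, -np.inf
    for _ in range(n_iters):
        red_cost = c + lam @ A              # reduced costs of the 0-1 variables
        x = (red_cost < 0).astype(float)    # closed-form subproblem minimizer
        q = red_cost @ x - lam @ b          # Lagrangian dual value at lam
        best_lb = max(best_lb, q)
        g = A @ x - b                       # subgradient of the dual function
        gg = g @ g
        if gg == 0.0:
            break
        lam = np.maximum(0.0, lam + theta * (ub - q) / gg * g)
    return best_lb, lam

# Toy 0-1 instance (assumed): min c.x  s.t.  A x <= b,  x in {0,1}^8
rng = np.random.default_rng(1)
c = rng.normal(size=8)
A = np.abs(rng.normal(size=(3, 8)))
b = np.full(3, 2.0)
lb, lam = lagrangian_relaxation(c, A, b)
```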
36. Spectral projected subgradient method with a 1-memory momentum term for constrained multiobjective optimization problem.
- Author
- Wang, Jing-jing, Tang, Li-ping, and Yang, Xin-min
- Subjects
- SUBGRADIENT methods, CONSTRAINED optimization, SHIFT registers
- Abstract
In this paper, we propose a spectral projected subgradient method with a 1-memory momentum term for solving constrained convex multiobjective optimization problem. This method combines the subgradient-type algorithm for multiobjective optimization problems with the idea of the spectral projected algorithm to accelerate the convergence process. Additionally, a 1-memory momentum term is added to the subgradient direction in the early iterations. The 1-memory momentum term incorporates, in the present iteration, some of the influence of the past iterations, and this can help to improve the search direction. Under suitable assumptions, we show that the sequence generated by the method converges to a weakly Pareto efficient solution and derive the sublinear convergence rates for the proposed method. Finally, computational experiments are given to demonstrate the effectiveness of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. A modified subgradient extragradient method with non-monotonic step sizes for solving quasimonotone variational inequalities.
- Author
- Thong, Duong Viet, Li, Xiao-Huan, Dung, Vu Tien, Van Thang, Hoang, and Van Long, Luong
- Subjects
- SUBGRADIENT methods, VARIATIONAL inequalities (Mathematics), HILBERT space, LIPSCHITZ continuity
- Abstract
In this work, we propose a self-adaptive projection method for solving variational inequalities with Lipschitz continuous and quasimonotone mapping (or Lipschitz continuous mapping without monotonicity) in real Hilbert space. Using the technique of double inertial steps into a single projection method, we give weak and strong convergence theorems of the proposed algorithm. The results obtained in this paper extend some recent results in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Modified inertial subgradient extragradient algorithms for generalized equilibria systems with constraints of variational inequalities and fixed points.
- Author
- Lu-Chuan Ceng, Shih-Hsin Chen, Yeong-Cheng Liou, and Tzu-Chien Yin
- Subjects
- VARIATIONAL inequalities (Mathematics), SUBGRADIENT methods, EQUILIBRIUM, ALGORITHMS
- Abstract
In this research, we studied modified inertial composite subgradient extragradient implicit rules for finding solutions of a system of generalized equilibrium problems with a common fixed-point problem and pseudomonotone variational inequality constraints. The suggested methods consisted of an inertial iterative algorithm, a hybrid deepest-descent technique, and a subgradient extragradient method. We proved that the constructed algorithms converge to a solution of the considered problem, which also solves a hierarchical variational inequality. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. Game‐based distributed energy‐sharing model for prosumers in microgrids considering carbon emissions and its fast equilibrium‐finding method
- Author
- Xinze Zheng, Qiang Li, and Juzhong Yuan
- Subjects
- energy sharing, game theory, microgrids, prosumers, subgradient methods, Technology, Science
- Abstract
In recent years, energy sharing has attracted a lot of attention. However, the intermediate platforms in centralized energy-sharing methods cause the rapid growth of communication complexity and the risk of privacy leakage. Unlike complex energy-sharing market mechanisms, in this paper, a simple and efficient distributed energy sharing for microgrids (MGs) is proposed, where game theory is employed to form flexible prices for prosumers and only local information is used to improve the privacy protection of prosumers. First, prosumers located in different areas are characterized more precisely and a two-tier carbon emission cost model is built. Next, a game-theory-based distributed energy-sharing model is proposed, where a flexible pricing mechanism is developed to enable prosumers to independently reach price agreements and achieve supply–demand balance within MGs. In the process, optimization models for obtaining equilibrium are formed and only local information is needed. However, solving these optimization models is generally time-consuming. So, a distributed optimization method based on weighted subgradients is proposed to accelerate the equilibrium-finding process. Finally, four cases are designed, and simulation results demonstrate that the prosumers' costs of our method are reduced by 7.5%–22.5% compared to the costs obtained by feed-in tariff. Moreover, in the case of solving the distributed trading model for an MG at a 24-h time scale, the iteration numbers of our method are only 38.9% and 49.3% of those of the two traditional solving methods.
- Published
- 2024
- Full Text
- View/download PDF
40. Scheduling of steelmaking-continuous casting process with different processing routes using effective surrogate Lagrangian relaxation approach and improved concave–convex procedure.
- Author
- Cui, Haijuan, Luo, Xiaochuan, and Wang, Yuan
- Subjects
- SUBGRADIENT methods, LAGRANGIAN functions, NONLINEAR programming, SCHEDULING, NONLINEAR equations
- Abstract
This paper studies a steelmaking-continuous casting scheduling problem with different processing routes. We model this problem as a mixed-integer nonlinear programming problem. Next, Lagrangian relaxation approach is introduced to solve this problem by relaxing the coupling constraints. Due to the nonseparability in Lagrangian functions, we design an improved concave–convex procedure to decompose the Lagrangian relaxation problem into three tractable subproblems and analyse the convergence of the improved concave–convex procedure under some assumptions. Furthermore, we present an effective surrogate subgradient algorithm with global convergence to solve the Lagrangian dual problem. Lastly, computational experiments on the practical production data show the effectiveness of the proposed surrogate subgradient method for solving this steelmaking-continuous casting scheduling problem. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
41. Goldstein stationarity in Lipschitz constrained optimization
- Author
- Grimmer, Benjamin and Jia, Zhichao
- Published
- 2024
- Full Text
- View/download PDF
42. One-Rank Linear Transformations and Fejer-Type Methods: An Overview.
- Author
- Semenov, Volodymyr, Stetsyuk, Petro, Stovba, Viktor, and Velarde Cantú, José Manuel
- Subjects
- SUBGRADIENT methods, CONVEX functions, CONVEX programming
- Abstract
Subgradient methods are frequently used for optimization problems. However, subgradient techniques are characterized by slow convergence for minimizing ravine convex functions. To accelerate subgradient methods, special linear non-orthogonal transformations of the original space are used. This paper provides an overview of these transformations based on Shor's original idea. Two one-rank linear transformations of Euclidean space are considered. These simple transformations form the basis of variable metric methods for convex minimization that have a natural geometric interpretation in the transformed space. Along with the space transformation, a search direction and a corresponding step size must be defined. Subgradient Fejer-type methods are analyzed to minimize convex functions, and Polyak step size is used for problems with a known optimal objective value. Convergence theorems are provided together with the results of numerical experiments. Directions for future research are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
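Shor's one-rank space dilation, the prototype for the transformations this overview surveys, can be sketched as a subgradient method that dilates the space in the direction of the current subgradient via R_β(ξ) = I + (β − 1)ξξᵀ. The step and dilation parameters below are illustrative choices, not tuned recommendations from the paper.

```python
import numpy as np

def shor_space_dilation(oracle, x0, h0=1.0, beta=0.5, q=0.95, n_iters=300):
    """Subgradient method with one-rank space dilation in the subgradient
    direction: B accumulates the transformation, B <- B R_beta(xi) with
    R_beta(xi) = I + (beta - 1) xi xi^T and unit vector xi."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size)                # accumulated space transformation
    h = h0
    best_x, best_f = x.copy(), np.inf
    for _ in range(n_iters):
        fx, g = oracle(x)
        if fx < best_f:
            best_x, best_f = x.copy(), fx
        gt = B.T @ g                  # subgradient seen in the transformed space
        norm = np.linalg.norm(gt)
        if norm == 0.0:
            break
        xi = gt / norm
        x = x - h * (B @ xi)          # step mapped back to the original space
        B = B + (beta - 1.0) * np.outer(B @ xi, xi)   # B <- B R_beta(xi)
        h *= q                        # geometrically decreasing step
    return best_x, best_f

# Toy run (assumed example): a ravine-like convex function
oracle = lambda x: (abs(x[0]) + 10 * abs(x[1]),
                    np.array([np.sign(x[0]), 10 * np.sign(x[1])]))
x_best, f_best = shor_space_dilation(oracle, np.array([5.0, 3.0]))
```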
43. S-Iteration inertial subgradient extragradient method for variational inequality and fixed point problems.
- Author
- Alakoya, T. O. and Mewomo, O. T.
- Subjects
- SUBGRADIENT methods, IMAGE reconstruction, VARIATIONAL inequalities (Mathematics), BANACH spaces, BANDWIDTH allocation, RESEARCH personnel
- Abstract
In developing iterative methods for approximating solutions of optimization problems, one of the major goals of researchers is to construct efficient iterative schemes. Over the years, different techniques have been devised to achieve this goal. In this paper, we study a non-Lipschitz pseudomonotone variational inequality problem with a common fixed points constraint of Bregman quasi-nonexpansive mappings. We introduce a new iterative method for approximating the solution of this problem in the more general framework of reflexive Banach spaces. Our method employs several techniques to achieve a high level of efficiency. One of these is the S-iteration process, a method known to be highly efficient in comparison with several well-known methods. Moreover, our proposed algorithm utilizes the inertial technique and a non-monotonic self-adaptive step size to guarantee a high rate of convergence and easy implementation. Unlike several existing results on variational inequality problems with non-Lipschitz operators, the design of our method does not involve any linesearch technique. We obtain a strong convergence result for the proposed algorithm without the sequential weak continuity condition often assumed to guarantee convergence when solving pseudomonotone variational inequality problems. Furthermore, we apply our result to study a utility-based bandwidth allocation problem and an image restoration problem. Finally, we present several numerical experiments to demonstrate the efficiency of our proposed method in comparison with existing state-of-the-art methods. Our result extends and improves several recently announced results in this direction in the literature. [ABSTRACT FROM AUTHOR] (A sketch of the underlying subgradient extragradient template follows this record.)
- Published
- 2024
- Full Text
- View/download PDF
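The subgradient extragradient template underlying methods like the one in entry 43 replaces the second projection onto the feasible set C by a cheap projection onto a half-space containing C. A minimal sketch with a fixed step, a box constraint, and an inertial term follows; the S-iteration and Bregman machinery of the actual paper are omitted, and `theta`, `lam` are illustrative parameters.

```python
import numpy as np

def project_box(z, lo, hi):
    return np.clip(z, lo, hi)

def project_halfspace(z, a, b):
    """Project z onto the half-space {u : <a, u> <= b} (closed form)."""
    viol = np.dot(a, z) - b
    if viol <= 0.0:
        return z
    return z - (viol / np.dot(a, a)) * a

def inertial_seg_step(x, x_prev, F, lo, hi, theta=0.3, lam=0.1):
    """One inertial subgradient extragradient step on a box constraint C."""
    w = x + theta * (x - x_prev)             # inertial extrapolation
    y = project_box(w - lam * F(w), lo, hi)  # first (extragradient) projection
    a = w - lam * F(w) - y                   # normal of half-space T containing C
    x_new = project_halfspace(w - lam * F(y), a, float(np.dot(a, y)))
    return x_new, x

# Toy usage: F(z) = z - 1 on the box [0, 2]^2, so the solution is (1, 1).
F = lambda z: z - 1.0
x, x_prev = np.zeros(2), np.zeros(2)
for _ in range(200):
    x, x_prev = inertial_seg_step(x, x_prev, F, 0.0, 2.0)
```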
44. Inertial subgradient extragradient method for solving pseudomonotone equilibrium problems and fixed point problems in Hilbert spaces.
- Author
-
Xie, Zhongbing, Cai, Gang, and Tan, Bing
- Subjects
- *
SUBGRADIENT methods , *HILBERT space , *NONEXPANSIVE mappings , *EQUILIBRIUM , *PROBLEM solving - Abstract
This paper proposes a new inertial subgradient extragradient method for solving equilibrium problems with pseudomonotone and Lipschitz-type bifunctions and fixed point problems for nonexpansive mappings in real Hilbert spaces. Precisely, we prove that the sequence generated by the proposed algorithm converges strongly to a common solution of the equilibrium problem and the fixed point problem. We use an effective self-adaptive step size rule to accelerate the convergence of the proposed iterative algorithm. Moreover, some numerical results are given to show the effectiveness of the proposed algorithm. The results obtained in this paper extend and improve many recent ones in the literature. [ABSTRACT FROM AUTHOR] (A sketch of one such self-adaptive rule follows this record.)
- Published
- 2024
- Full Text
- View/download PDF
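Self-adaptive rules of the kind mentioned in entry 44 typically shrink the step only when a Lipschitz-type inequality for the bifunction is violated, so no Lipschitz constants are needed in advance. The sketch below shows one such rule under assumed notation (`x`, `y`, `z` as the current, extragradient, and new iterates, `mu` in (0, 1)); the paper's exact non-monotonic rule may differ.

```python
import numpy as np

def adaptive_step(lam, mu, x, y, z, f):
    """Candidate step from the Lipschitz-type surplus of the bifunction f;
    keep the current lam whenever the surplus is nonpositive."""
    gap = f(x, z) - f(x, y) - f(y, z)      # Lipschitz-type surplus
    if gap <= 0.0:
        return lam
    cand = mu * (np.linalg.norm(x - y) ** 2
                 + np.linalg.norm(z - y) ** 2) / (2.0 * gap)
    return min(lam, cand)
```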
45. Nonsmooth Nonconvex Stochastic Heavy Ball.
- Author
-
Le, Tam
- Subjects
- *
SEMIALGEBRAIC sets , *MACHINE learning , *SUBGRADIENT methods , *DEEP learning , *NONSMOOTH optimization , *DIFFERENTIAL calculus - Abstract
Motivated by the conspicuous use of momentum-based algorithms in deep learning, we study a nonsmooth nonconvex stochastic heavy ball method and show its convergence. Our approach builds upon semialgebraic (definable) assumptions commonly met in practical situations and combines a nonsmooth calculus with a differential inclusion method. Additionally, we provide general conditions on the sample distribution to ensure the convergence of the objective function. Our results are general enough to justify the use of subgradient sampling in modern implementations that heuristically apply rules of differential calculus on nonsmooth functions, such as backpropagation or implicit differentiation. As for the stochastic subgradient method, our analysis highlights that subgradient sampling can make the stochastic heavy ball method converge to artificial critical points. Thanks to the semialgebraic setting, we address this concern by showing that these artifacts are almost surely avoided when initializations are randomized, leading the method to converge to Clarke critical points. [ABSTRACT FROM AUTHOR] (A sketch of the basic iteration follows this record.)
- Published
- 2024
- Full Text
- View/download PDF
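The iteration studied in entry 45 is, at its core, a sampled subgradient step plus a momentum term. A minimal sketch on a toy nonsmooth objective follows; the objective, sampling scheme, and parameters are illustrative assumptions, not the paper's setting.

```python
import numpy as np

def heavy_ball_step(x, x_prev, sample_subgrad, alpha, beta):
    """x+ = x - alpha * g + beta * (x - x_prev), with g a sampled subgradient."""
    return x - alpha * sample_subgrad(x) + beta * (x - x_prev), x

# Toy usage: f(x) = E|x - xi| with xi ~ N(0, 1); sign(x - xi) samples
# a subgradient of the integrand at x.
rng = np.random.default_rng(0)
x, x_prev = 5.0, 5.0
for k in range(1000):
    x, x_prev = heavy_ball_step(
        x, x_prev,
        lambda z: np.sign(z - rng.standard_normal()),
        alpha=1.0 / (k + 1),
        beta=0.5,
    )
```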
46. Distributed online primal-dual subgradient method on unbalanced directed networks.
- Author
-
Tada, Keishin, Hayashi, Naoki, and Takai, Shigemasa
- Subjects
- *
SUBGRADIENT methods , *ONLINE algorithms , *COST functions , *TELECOMMUNICATION systems , *DIRECTED graphs , *DISTRIBUTED algorithms , *TIME-varying networks - Abstract
In this paper, we investigate a distributed online optimization method on multiagent communication networks. We consider a distributed primal-dual subgradient algorithm for an online convex optimization problem with time-varying coupled constraints. Each agent updates its estimates of the primal and dual optimizers by a consensus-based online algorithm. In the proposed algorithm, the gradient direction is scaled by estimating the left eigenvector of a weight matrix associated with the directed communication network. This scaling enables the agents to track the time-varying optimal solutions by counterbalancing the unbalanced communication flows (see the sketch after this record). The performance of the proposed algorithm is examined via a dynamic regret and a fit, which evaluate the cumulative error against the time-varying optimal cost function and the constraint function, respectively. We provide a sufficient condition under which both the dynamic regret and the fit are sublinear. A numerical example of an online economic dispatch problem confirms the validity of the proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
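The key scaling device in entry 46 can be illustrated in a few lines: on a row-stochastic digraph, each agent rescales its subgradient by a running estimate of its entry of the weight matrix's left Perron eigenvector (the vector `y` below, which evolves by y <- A^T y). The dual/constraint part of the actual primal-dual method is omitted, and the network, losses, and step size are toy assumptions.

```python
import numpy as np

A = np.array([[0.6, 0.4, 0.0],          # row-stochastic weights of an
              [0.0, 0.7, 0.3],          # unbalanced directed network
              [0.5, 0.0, 0.5]])
n = A.shape[0]
targets = np.array([1.0, 2.0, 3.0])     # agent i's private loss 0.5*(x - t_i)^2
x = np.zeros(n)                         # each agent's scalar decision
y = np.ones(n)                          # left-eigenvector estimates

for t in range(200):
    grads = x - targets                 # local (sub)gradients
    x = A @ x - (0.05 / y) * grads      # consensus + eigenvector-scaled step
    y = A.T @ y                         # y -> n * (left Perron vector of A)
# The scaled steps drive every x_i toward the minimizer of the summed losses.
```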
47. QUASI-SUBGRADIENT METHODS WITH BREGMAN DISTANCE FOR QUASI-CONVEX FEASIBILITY PROBLEMS.
- Author
-
YAOHUA HU, JINGCHAO LI, YANYAN LIU, and KWOK WAI YU, CARISA
- Subjects
STOCHASTIC convergence ,SUBGRADIENT methods ,CONSTRAINT databases ,MATHEMATICAL bounds ,FEASIBILITY studies - Abstract
In this paper, we consider the quasi-convex feasibility problem (QFP), which is to find a common point of a family of sublevel sets of quasi-convex functions. By employing the Bregman projection mapping, we propose a unified framework of Bregman quasi-subgradient methods for solving the QFP. This paper establishes the convergence theory, including the global convergence, iteration complexity, and convergence rates, of the Bregman quasi-subgradient methods with several general control schemes, including the α-most violated constraints control and the s-intermittent control. Moreover, we introduce a notion of the Hölder-type bounded error bound property relative to the Bregman distance for the QFP, and use it to establish the linear (or sublinear) convergence rates of Bregman quasi-subgradient methods to a feasible solution of the QFP. [ABSTRACT FROM AUTHOR] (A sketch of one Bregman quasi-subgradient step follows this record.)
- Published
- 2024
- Full Text
- View/download PDF
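As a heavily hedged illustration of entry 47's ingredients, the sketch below performs one quasi-subgradient step under the negative-entropy Bregman kernel on the probability simplex, where the Bregman update becomes multiplicative; the normalized direction reflects the usual quasi-subgradient convention, but the constraint-selection schemes and step-size theory of the paper are not reproduced.

```python
import numpy as np

def bregman_quasi_subgradient_step(x, g, nu):
    """Mirror-style step solving grad h(x+) = grad h(x) - nu * g/||g||
    with h = negative entropy, then renormalizing onto the simplex.
    Assumes g != 0, i.e. the selected constraint is violated at x."""
    d = g / np.linalg.norm(g)        # quasi-subgradient directions are normalized
    x_new = x * np.exp(-nu * d)      # multiplicative (entropic) update
    return x_new / x_new.sum()       # Bregman projection back onto the simplex
```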
48. Double inertial steps extragadient-type methods for solving optimal control and image restoration problems.
- Author
-
Ofem, Austine Efut, Abuchu, Jacob Ashiwere, Ugwunnadi, Godwin Chidi, Nabwey, Hossam A., Adamu, Abubakar, and Narain, Ojen Kumar
- Subjects
SUBGRADIENT methods ,VARIATIONAL inequalities (Mathematics) ,IMAGE reconstruction ,HILBERT space ,NONEXPANSIVE mappings ,PROBLEM solving ,VISCOSITY - Abstract
To approximate the common solution of quasi-nonexpansive fixed point and pseudo-monotone variational inequality problems in real Hilbert spaces, this paper presents three new modified subgradient extragradient-type methods. Our algorithms incorporate viscosity terms and double inertial extrapolations to ensure strong convergence and to speed up convergence (see the sketch of the double extrapolation after this record). No Armijo-type line search is required; instead, the algorithms employ a novel self-adaptive step size technique that produces a non-monotonic sequence of step sizes while correctly incorporating a number of well-known step sizes. The step size is designed to lessen the algorithms' reliance on the initial step size. Numerical tests show that our step size is more effective and guarantees that our methods require less execution time. We state and prove the strong convergence of our algorithms under mild conditions on the control parameters. To show the computational advantage of the suggested methods over some well-known methods in the literature, several numerical experiments are provided. To test the applicability and efficiency of our methods on real-world problems, we apply them to optimal control and image restoration problems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
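"Double inertial steps" as in entry 48 commonly amount to two extrapolations from the same pair of iterates with different coefficients, which are then fed into the extragradient projections. A schematic sketch, with `theta` and `delta` as assumed parameters and the projection steps omitted:

```python
def double_inertial(x_n, x_prev, theta, delta):
    """Two inertial extrapolations from the last two iterates; the
    subsequent subgradient extragradient projections are omitted here."""
    w = x_n + theta * (x_n - x_prev)   # first inertial extrapolation
    z = x_n + delta * (x_n - x_prev)   # second inertial extrapolation
    return w, z
```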
49. Convergence properties of stochastic proximal subgradient method in solving a class of composite optimization problems with cardinality regularizer.
- Author
-
Hu, Xiaoyin, Liu, Xin, and Xiao, Nachuan
- Subjects
SUBGRADIENT methods ,STOCHASTIC convergence ,NONSMOOTH optimization ,PROBLEM solving - Abstract
In this paper, we study a class of composite optimization problems whose objective function is the sum of a finite number of nonsmooth nonconvex loss functions and a cardinality regularizer. We first investigate the optimality conditions of these problems and then propose a stochastic proximal subgradient method (SPSG) to solve them (a sketch of one iteration follows this record). We establish the almost sure subsequence convergence of SPSG under mild assumptions and emphasize that these assumptions are satisfied by a wide range of problems arising from training neural networks. Furthermore, we conduct preliminary numerical experiments to demonstrate the effectiveness and efficiency of SPSG in solving this class of problems. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
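The proximal map of the cardinality regularizer lam*||x||_0 has a closed form, hard thresholding, which makes an SPSG-type iteration as in entry 49 cheap to sketch; the loss, its sampled subgradient, and the step size below are illustrative assumptions rather than the authors' exact scheme.

```python
import numpy as np

def hard_threshold(z, lam, step):
    """Proximal map of step*lam*||.||_0: zero every entry with
    |z_i| <= sqrt(2*step*lam) and keep the rest unchanged."""
    out = z.copy()
    out[np.abs(z) <= np.sqrt(2.0 * step * lam)] = 0.0
    return out

def spsg_step(x, sample_subgrad, lam, step):
    """Stochastic subgradient step on the loss, then the l0 prox."""
    return hard_threshold(x - step * sample_subgrad(x), lam, step)
```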
50. Non-Quadratic Proxy Functions in Mirror Descent Method Applied to Designing of Robust Controllers for Nonlinear Dynamic Systems with Uncertainty.
- Author
-
Nazin, A. V. and Poznyak, A. S.
- Subjects
- *
NONLINEAR dynamical systems , *COST functions , *SUBGRADIENT methods , *ORDINARY differential equations , *MIRRORS , *SOLAR cycle - Abstract
We consider a class of controlled nonlinear plants whose dynamics are governed by a vector system of ordinary differential equations with a partially known right-hand side. The objective is to construct a robust tracking controller with certain constraints on the state variables, assuming that the state variables and their time derivatives can be observed. The development is based on the mirror descent approach, frequently employed in convex optimization problems involving static objects, together with the Legendre–Fenchel transform and a chosen proxy function. The suggested unifying architecture extends the Average Subgradient Method (an improved version of the Subgradient Descent Method) and the Integral Sliding Mode technique for continuous-time control systems. The primary findings are that the "desired regime" (a non-stationary analog of the sliding surface) can be achieved from the very start of the process, and that an explicit upper bound on the decrement of the cost function is obtained. [ABSTRACT FROM AUTHOR] (A sketch of a non-quadratic mirror step follows this record.)
- Published
- 2024
- Full Text
- View/download PDF
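The textbook example of a non-quadratic proxy function, as discussed in entry 50, is the negative entropy on the simplex, for which the mirror descent step becomes a multiplicative update. A minimal sketch follows; the robust-control construction itself is far beyond this illustration, and the toy objective is an assumption.

```python
import numpy as np

def entropic_md_step(x, g, step):
    """With proxy h(x) = sum_i x_i log x_i, the mirror step solves
    grad h(x+) = grad h(x) - step * g over the simplex, i.e. a
    multiplicative update followed by renormalization."""
    x_new = x * np.exp(-step * g)
    return x_new / x_new.sum()

# Toy usage: minimize <c, x> over the simplex; mass concentrates on argmin c.
c = np.array([0.3, 0.1, 0.6])
x = np.ones(3) / 3.0
for _ in range(100):
    x = entropic_md_step(x, c, step=0.5)
```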