Convergence analysis of optimization-by-continuation algorithms
- Publication Year : 2023
Abstract
- We discuss several iterative optimization algorithms for the minimization of a cost function consisting of a linear combination of up to three convex terms, at least one of which is differentiable and a second of which is prox-simple. Such optimization problems frequently occur in the numerical solution of inverse problems (data-misfit term plus penalty or constraint term). We present several new results on the convergence of proximal-gradient-like algorithms in the context of an optimization-by-continuation strategy. The algorithms' special feature lies in their ability to approximate, in a single iteration run, the minimizers of the cost function for many different values of the parameters determining the relative weight of the three terms in the cost function (penalty parameters). As a special case, one recovers a generalization of the primal-dual algorithm of Chambolle and Pock.
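- To illustrate the general idea described in the abstract (not the paper's specific algorithm), the following minimal Python sketch runs a plain proximal-gradient method with a continuation schedule over the penalty parameter, warm-starting each stage from the previous iterate. The LASSO instance, the geometric schedule, and all function names are assumptions chosen for illustration only.

```python
# Illustrative sketch: proximal gradient with continuation over the penalty
# parameter, applied to  min_x 0.5*||A x - b||^2 + lam*||x||_1  (LASSO).
# This is NOT the algorithm of the cited paper; it only illustrates how a
# single iteration run can yield approximate minimizers for many penalties.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (the prox-simple term)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_continuation(A, b, lam_schedule, n_inner=200):
    """Warm-started proximal gradient across a decreasing penalty schedule.

    Returns one approximate minimizer per value in `lam_schedule`.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    solutions = []
    for lam in lam_schedule:                  # continuation over the penalty parameter
        for _ in range(n_inner):
            grad = A.T @ (A @ x - b)          # gradient of the differentiable term
            x = soft_threshold(x - step * grad, step * lam)
        solutions.append(x.copy())            # approximate minimizer for this lam
    return solutions

# Usage: approximate minimizers for several penalty weights from one run.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = rng.standard_normal(100) * (rng.random(100) < 0.1)
b = A @ x_true + 0.01 * rng.standard_normal(40)
lams = np.geomspace(1.0, 0.01, 8) * np.linalg.norm(A.T @ b, np.inf)
sols = prox_grad_continuation(A, b, lams)
```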
Details
- Database : OAIster
- Notes : 1 full-text file(s): application/pdf, English
- Publication Type : Electronic Resource
- Accession number : edsoai.on1415716591
- Document Type : Electronic Resource