589 results
Search Results
2. A Theory of Alternating Paths and Blossoms from the Perspective of Minimum Length.
- Author
-
Vazirani, Vijay V.
- Subjects
INDEPENDENT sets ,ALGORITHMS - Abstract
The Micali–Vazirani (MV) algorithm for finding a maximum cardinality matching in general graphs, which was published in 1980, remains to this day the most efficient known algorithm for the problem. The current paper gives the first complete and correct proof of this algorithm. The MV algorithm resorts to finding minimum-length augmenting paths. However, such paths fail to satisfy an elementary property, called breadth first search honesty in this paper. In the absence of this property, an exponential time algorithm appears to be called for—just for finding one such path. On the other hand, the MV algorithm accomplishes this and additional tasks in linear time. The saving grace is the various "footholds" offered by the underlying structure, which the algorithm uses in order to perform its key tasks efficiently. The theory expounded in this paper elucidates this rich structure and yields a proof of correctness of the algorithm. It may also be of independent interest as a set of well-knit graph-theoretic facts. Funding: This work was supported in part by the National Science Foundation [Grant CCF-2230414]. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
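The abstract above concerns minimum-length augmenting paths in general graphs. As a much simpler, hedged illustration of the augmenting-path idea (this is Kuhn's textbook algorithm for the bipartite case, not the MV algorithm; the function name and example graph are mine):

```python
def max_bipartite_matching(adj, n_left, n_right):
    """Maximum bipartite matching via repeated augmenting-path search
    (Kuhn's algorithm); adj[u] lists right-vertices adjacent to left u."""
    match_right = [-1] * n_right  # match_right[v] = left partner of v, or -1

    def try_augment(u, visited):
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            # v is free, or its current partner can be rematched elsewhere
            if match_right[v] == -1 or try_augment(match_right[v], visited):
                match_right[v] = u
                return True
        return False

    size = 0
    for u in range(n_left):
        if try_augment(u, set()):
            size += 1
    return size
```

Kuhn's method runs in O(VE); the MV algorithm achieves O(√V·E) on general (nonbipartite) graphs, where blossoms make the augmenting-path search far harder.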
3. A Localized Progressive Hedging Algorithm for Solving Nonmonotone Stochastic Variational Inequalities.
- Author
-
Cui, Xingbang and Zhang, Liping
- Subjects
LINEAR complementarity problem ,ALGORITHMS - Abstract
The progressive hedging algorithm (PHA) is an effective solution method for solving monotone stochastic variational inequalities (SVIs). However, this validity is based on the assumption of global maximal monotonicity. In this paper, we propose a localized PHA for solving nonmonotone SVIs and show that its validity is based on the weaker assumption of locally elicitable maximal monotonicity. Furthermore, we prove that such assumption holds when the mapping involved in the SVI is locally elicitable monotone or locally monotone. The local convergence of the proposed algorithm is established, and it is shown that the localized PHA has the rate of linear convergence under some mild assumptions. Some numerical experiments, including a two-stage orange market problem and randomly generated two-stage piecewise stochastic linear complementarity problems, indicate that the proposed algorithm is efficient. Funding: This work was supported by the National Natural Science Foundation of China [Grant 12171271]. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. Algorithms for Competitive Division of Chores.
- Author
-
Brânzei, Simina and Sandomirskiy, Fedor
- Subjects
CHORES ,POLYNOMIAL time algorithms ,PARETO optimum ,NURSE practitioners ,ALGORITHMS ,SOCIAL services - Abstract
We study the problem of allocating divisible bads (chores) among multiple agents with additive utilities when monetary transfers are not allowed. The competitive rule is known for its remarkable fairness and efficiency properties in the case of goods. This rule was extended to chores by Bogomolnaia, Moulin, Sandomirskiy, and Yanovskaya. For both goods and chores, the rule produces Pareto optimal and envy-free allocations. In the case of goods, the outcome of the competitive rule can be easily computed. Competitive allocations solve the Eisenberg-Gale convex program; hence the outcome is unique and can be approximately found by standard gradient methods. An exact algorithm that runs in polynomial time in the number of agents and goods was given by Orlin. In the case of chores, the competitive rule does not solve any convex optimization problem; instead, competitive allocations correspond to local minima, local maxima, and saddle points of the Nash social welfare on the Pareto frontier of the set of feasible utilities. The Pareto frontier may contain many such points and, consequently, the outcome of the competitive rule is no longer unique. In this paper, we show that all the outcomes of the competitive rule for chores can be computed in strongly polynomial time if either the number of agents or the number of chores is fixed. The approach is based on a combination of three ideas: all consumption graphs of Pareto optimal allocations can be listed in polynomial time; for a given consumption graph, a candidate for a competitive utility profile can be constructed via an explicit formula; each candidate can be checked for competitiveness and the allocation can be reconstructed using a maximum flow computation. Our algorithm immediately gives an approximately-fair allocation of indivisible chores by the rounding technique of Barman and Krishnamurthy. 
Funding: This work was supported by National Science Foundation (CNS 1518941); Lady Davis Fellowship Trust, Hebrew University of Jerusalem; H2020 European Research Council (740435); Linde Institute at Caltech. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. An Oblivious Ellipsoid Algorithm for Solving a System of (In)Feasible Linear Inequalities.
- Author
-
Lamperski, Jourdain, Freund, Robert M., and Todd, Michael J.
- Subjects
ELLIPSOIDS ,INTERIOR-point methods ,ALGORITHMS ,SIMPLEX algorithm ,COMPUTATIONAL complexity - Abstract
The ellipsoid algorithm is a fundamental algorithm for computing a solution to the system of m linear inequalities in n variables (P): A⊤x ≤ u when its set of solutions has positive volume. However, when (P) is infeasible, the ellipsoid algorithm has no mechanism for proving that (P) is infeasible. This is in contrast to the other two fundamental algorithms for tackling (P), namely, the simplex and interior-point methods, each of which can be easily implemented in a way that either produces a solution of (P) or proves that (P) is infeasible by producing a solution to the alternative system (Alt): Aλ = 0, u⊤λ < 0, λ ≥ 0. This paper develops an oblivious ellipsoid algorithm (OEA) that either produces a solution of (P) or produces a solution of (Alt). Depending on the dimensions and other condition measures, the computational complexity of the basic OEA may be worse than, the same as, or better than that of the standard ellipsoid algorithm. We also present two modified versions of OEA, whose computational complexity is superior to that of OEA when n ≪ m. This is achieved in the first modified version by proving infeasibility without producing a solution of (Alt), and in the second version by using more memory. Funding: J. Lamperski and R. M. Freund were supported by the Air Force Office of Scientific Research [Grant FA9550-19-1-0240]. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
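As context for the record above, here is a minimal sketch of the standard (non-oblivious) central-cut ellipsoid method for finding a point with Ax ≤ u, assuming the feasible set has positive volume inside a known starting ball; the function name, parameters, and example system are mine:

```python
import math

def ellipsoid_feasible(A, u, n, radius=10.0, max_iter=1000):
    """Central-cut ellipsoid method for finding x with A x <= u, assuming
    the feasible set has positive volume inside the starting ball of the
    given radius. Returns a feasible point, or None if the budget runs out."""
    x = [0.0] * n
    P = [[radius ** 2 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(max_iter):
        viol = next((i for i, row in enumerate(A)
                     if sum(row[j] * x[j] for j in range(n)) > u[i]), None)
        if viol is None:
            return x  # every inequality holds at the current center
        g = A[viol]  # normal of the violated constraint
        Pg = [sum(P[i][j] * g[j] for j in range(n)) for i in range(n)]
        gPg = sum(g[i] * Pg[i] for i in range(n))
        step = math.sqrt(gPg)
        # shift the center away from the violated half-space ...
        x = [x[i] - Pg[i] / ((n + 1) * step) for i in range(n)]
        # ... and shrink the ellipsoid around the kept half
        coef = n * n / (n * n - 1.0)
        P = [[coef * (P[i][j] - 2.0 * Pg[i] * Pg[j] / ((n + 1) * gPg))
              for j in range(n)] for i in range(n)]
    return None

# feasible triangle (my example): x <= -0.1, y <= -0.1, -x - y <= 1
x = ellipsoid_feasible([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]],
                       [-0.1, -0.1, 1.0], n=2)
```

As the abstract notes, this method can only ever report a feasible point; on an infeasible system it exhausts its budget without producing a certificate, which is exactly the gap the OEA addresses.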
6. On the Douglas–Rachford Algorithm for Solving Possibly Inconsistent Optimization Problems.
- Author
-
Bauschke, Heinz H. and Moursi, Walaa M.
- Subjects
CONVEX functions ,ALGORITHMS ,GEOMETRY ,RESEARCH grants - Abstract
More than 40 years ago, Lions and Mercier introduced in a seminal paper the Douglas–Rachford algorithm. Today, this method is well-recognized as a classic and highly successful splitting method to find minimizers of the sum of two (not necessarily smooth) convex functions. Whereas the underlying theory has matured, one case remains a mystery: the behavior of the shadow sequence when the given functions have disjoint domains. Building on previous work, we establish for the first time weak and value convergence of the shadow sequence generated by the Douglas–Rachford algorithm in a setting of unprecedented generality. The weak limit point is shown to solve the associated normal problem, which is a minimal perturbation of the original optimization problem. We also present new results on the geometry of the minimal displacement vector. Funding: The research of H. H. Bauschke and W. M. Moursi was partially supported by Discovery Grants of the Natural Sciences and Engineering Research Council of Canada [Grants RGPIN-2018-03703 and RGPIN-2019-04803], respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
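The Douglas–Rachford iteration discussed above, in its textbook form for min f + g, can be sketched on a scalar example; the prox formulas, names, and test problem below are my illustration, not the paper's (which treats the harder case of disjoint domains):

```python
def soft_threshold(v, t):
    """Proximal operator of t * |.|."""
    return max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0)

def douglas_rachford(prox_f, prox_g, z=0.0, t=1.0, iters=300):
    """Scalar Douglas-Rachford splitting for min f(x) + g(x).
    Returns the shadow point prox_f(z) after `iters` iterations."""
    for _ in range(iters):
        x = prox_f(z, t)              # shadow sequence
        y = prox_g(2.0 * x - z, t)    # reflected step through g
        z = z + y - x                 # governing sequence update
    return prox_f(z, t)

# f(x) = |x|, g(x) = 0.5 * (x - 2)**2; the minimizer of f + g is x = 1
x_star = douglas_rachford(soft_threshold,
                          lambda v, t: (v + 2.0 * t) / (1.0 + t))
```

Here the shadow sequence prox_f(z_k) converges to the minimizer; the paper's contribution is weak and value convergence of this same shadow sequence even when dom f and dom g are disjoint.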
7. Error Analysis of Surrogate Models Constructed Through Operations on Submodels.
- Author
-
Chen, Yiwen, Hare, Warren, and Jarry-Bolduc, Gabriel
- Subjects
ALGORITHMS - Abstract
Model-based methods are popular in derivative-free optimization (DFO). In most of them, a single model function is built to approximate the objective function. This is generally based on the assumption that the objective function is one black box. However, some real-life and theoretical problems show that the objective function may consist of several black boxes. In those problems, the information provided by each black box may not be equal. In this situation, one can build multiple submodels that are then combined to become a final model. In this paper, we analyze the relation between the accuracy of those submodels and the model constructed through their operations. We develop a broad framework that can be used as a theoretical tool in model error analysis and future research in DFO algorithm design. Funding: Y. Chen's research is partially funded by the MITACS Globalink program. All authors' research was partially supported by NSERC of Canada Discovery [Grant 2018-03865]. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. Existence of Approximate Exact Penalty in Constrained Optimization.
- Author
-
Zaslavski, Alexander J.
- Subjects
CONSTRAINED optimization ,MATHEMATICAL optimization ,HARM reduction ,INFINITE dimensional Lie algebras ,ALGORITHMS ,MATHEMATICAL analysis - Abstract
In this paper, we use the penalty approach in order to study constrained minimization problems in infinite dimensional spaces. A penalty function is said to have the exact penalty property if there is a penalty coefficient for which a solution of an unconstrained penalized problem is a solution of the corresponding constrained problem. In this paper, we establish the exact penalty property for a large class of inequality-constrained minimization problems. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
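The exact penalty property described above can be observed numerically on a tiny example: for min x² subject to x ≥ 1 (solution x = 1), the penalized objective x² + c·max(0, 1 − x) has its unconstrained minimizer exactly at x = 1 once c ≥ 2, but not for smaller c. A grid-search sketch (the example problem and all names are mine):

```python
def penalized_min(c, lo=-2.0, hi=3.0, steps=50001):
    """Grid-minimize the penalized objective x**2 + c * max(0, 1 - x),
    the penalty form of: min x**2 subject to x >= 1 (solution x = 1)."""
    best_x, best_v = lo, float("inf")
    for k in range(steps):
        x = lo + (hi - lo) * k / (steps - 1)
        v = x * x + c * max(0.0, 1.0 - x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x
```

For c = 2 the penalized objective equals (x − 1)² + 1 on x < 1 and x² on x ≥ 1, so x = 1 is its unique unconstrained minimizer; for c = 1 the minimizer drifts to x = 0.5, so the penalty is not exact.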
9. ON THE EXISTENCE OF EASY INITIAL STATES FOR UNDISCOUNTED STOCHASTIC GAMES.
- Author
-
Tijs, S. H. and Vrieze, O. J.
- Subjects
GAME theory ,STOCHASTIC processes ,MATHEMATICAL optimization ,MATHEMATICAL analysis ,MATHEMATICS ,SIMULATION methods & models ,ALGORITHMS ,MATHEMATICAL functions ,FUNCTIONALS - Abstract
This paper deals with undiscounted infinite stage two-person zero-sum stochastic games with finite state and action spaces. It was recently shown that such games possess a value. But in general there are no optimal strategies. We prove that for each player there exists a nonempty set of easy initial states, i.e. starting states for which the player possesses an optimal stationary strategy. This result is proved with the aid of facts derived by Bewley and Kohlberg for the limit discount equation for stochastic games. [ABSTRACT FROM AUTHOR]
- Published
- 1986
- Full Text
- View/download PDF
10. An Optimal High-Order Tensor Method for Convex Optimization.
- Author
-
Jiang, Bo, Wang, Haoyue, and Zhang, Shuzhong
- Subjects
CONVEX functions ,ALGORITHMS ,SMOOTHNESS of functions - Abstract
This paper is concerned with finding an optimal algorithm for minimizing a composite convex objective function. The basic setting is that the objective is the sum of two convex functions: the first function is smooth with up to the dth-order derivative information available, and the second function is possibly nonsmooth, but its proximal tensor mappings can be computed approximately in an efficient manner. The problem is to find—in that setting—the best possible (optimal) iteration complexity for convex optimization. Along that line, for the smooth case (without the second nonsmooth part in the objective), Nesterov proposed an optimal algorithm for the first-order methods (d = 1) with iteration complexity O(1/k^2), whereas high-order tensor algorithms (using up to general dth-order tensor information) with iteration complexity O(1/k^(d+1)) were recently established. In this paper, we propose a new high-order tensor algorithm for the general composite case, with the iteration complexity of O(1/k^((3d+1)/2)), which matches the lower bound for the dth-order methods as previously established and hence is optimal. Our approach is based on the accelerated hybrid proximal extragradient (A-HPE) framework proposed by Monteiro and Svaiter, where a bisection procedure is installed for each A-HPE iteration. At each bisection step, a proximal tensor subproblem is approximately solved, and the total number of bisection steps per A-HPE iteration is shown to be bounded by a logarithmic factor in the precision required. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
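For the d = 1 case cited above (Nesterov's optimal first-order method, with O(1/k²) objective error on L-smooth convex functions), here is a minimal sketch; the quadratic test function and all names are my illustration:

```python
import math

def nesterov(grad, x0, L, iters):
    """Nesterov's accelerated gradient method: the optimal first-order
    (d = 1) scheme, with O(1/k^2) objective error on L-smooth convex f."""
    x = list(x0)
    y = list(x0)
    t = 1.0
    for _ in range(iters):
        g = grad(y)
        x_new = [y[i] - g[i] / L for i in range(len(y))]     # gradient step
        t_new = (1.0 + math.sqrt(1.0 + 4.0 * t * t)) / 2.0   # momentum weight
        y = [x_new[i] + (t - 1.0) / t_new * (x_new[i] - x[i])
             for i in range(len(x_new))]                     # extrapolation
        x, t = x_new, t_new
    return x

# f(x) = 0.5 * (x1^2 + 10 * x2^2): L = 10, minimizer at the origin
x = nesterov(lambda v: [v[0], 10.0 * v[1]], [5.0, 5.0], L=10.0, iters=1000)
```

The higher-order (d ≥ 2) tensor methods of the paper generalize exactly this accelerated template, replacing the gradient step with an approximately solved proximal tensor subproblem.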
11. NONHOMOGENOUS MARKOV DECISION PROCESSES WITH BOREL STATE SPACE-- THE AVERAGE CRITERION WITH NONUNIFORMLY BOUNDED REWARDS.
- Author
-
Xianping Guo, Jianyong Liu, and Ke Liu
- Subjects
MARKOV processes ,BOREL sets ,MATHEMATICAL optimization ,ALGORITHMS ,STOCHASTIC processes - Abstract
This paper deals with nonhomogeneous Markov decision processes with Borel state space and nonuniformly bounded rewards under the average criterion. First, under a minorant-type assumption, we prove the existence of an appropriate solution to the optimality equations. Second, from the optimality equations, we also establish the existence of ε(≥ 0)-optimal Markov policies under additional conditions. Third, some sufficient conditions for the validity of the assumptions in this paper and several examples, such as inventory/production systems, are provided. Finally, as an application of the optimality equations, a rolling horizon algorithm is given. [ABSTRACT FROM AUTHOR]
- Published
- 2000
- Full Text
- View/download PDF
12. Provably Efficient Reinforcement Learning with Linear Function Approximation.
- Author
-
Jin, Chi, Yang, Zhuoran, Wang, Zhaoran, and Jordan, Michael I.
- Subjects
REINFORCEMENT learning ,MACHINE learning ,POLYNOMIAL time algorithms ,MILITARY research ,ALGORITHMS - Abstract
Modern reinforcement learning (RL) is commonly applied to practical problems with an enormous number of states, where function approximation must be deployed to approximate either the value function or the policy. The introduction of function approximation raises a fundamental set of challenges involving computational and statistical efficiency, especially given the need to manage the exploration/exploitation trade-off. As a result, a core RL question remains open: how can we design provably efficient RL algorithms that incorporate function approximation? This question persists even in a basic setting with linear dynamics and linear rewards, for which only linear function approximation is needed. This paper presents the first provable RL algorithm with both polynomial run time and polynomial sample complexity in this linear setting, without requiring a "simulator" or additional assumptions. Concretely, we prove that an optimistic modification of least-squares value iteration—a classical algorithm frequently studied in the linear setting—achieves Õ(√(d^3 H^3 T)) regret, where d is the ambient dimension of the feature space, H is the length of each episode, and T is the total number of steps. Importantly, such regret is independent of the number of states and actions. Funding: This work was supported by the Defense Advanced Research Projects Agency program on Lifelong Learning Machines. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
13. Solving Optimal Stopping Problems via Randomization and Empirical Dual Optimization.
- Author
-
Belomestny, Denis, Bender, Christian, and Schoenmakers, John
- Subjects
LINEAR programming ,STOCHASTIC approximation ,ALGORITHMS - Abstract
In this paper, we consider optimal stopping problems in their dual form. In this way, the optimal stopping problem can be reformulated as a sample average approximation (SAA) problem that can be solved via linear programming. By randomizing the initial value of the underlying process, we enforce solutions with zero variance while preserving the linear programming structure of the problem. A careful analysis of the randomized SAA algorithm shows that it enjoys favorable properties such as faster convergence rates and reduced complexity compared with the nonrandomized procedure. We illustrate the performance of our algorithm on several benchmark examples. Funding: This work was supported by the Deutsche Forschungsgemeinschaft via the MATH+ Cluster of Excellence [Project AA4-2]. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
14. LEARNING ALGORITHMS FOR TWO-PERSON ZERO-SUM STOCHASTIC GAMES WITH INCOMPLETE INFORMATION.
- Author
-
Lakshmivarahan, S. and Narendra, Kumpati S.
- Subjects
ALGORITHMS ,GAME theory ,STOCHASTIC processes - Abstract
This paper investigates conditions under which two learning algorithms playing a zero-sum sequential stochastic game would arrive at optimal pure strategies. Neither player has knowledge of either the pay-off matrix or the choice of strategies available to the other, and both players update their own strategies at every stage entirely on the basis of the random outcome at that stage. The proposed learning algorithms are shown to converge to the optimal pure strategies, when they exist, with probabilities as close to 1 as desired. [ABSTRACT FROM AUTHOR]
- Published
- 1981
- Full Text
- View/download PDF
15. Maximum-Stopping-Value Policies in Finite Markov Population Decision Chains.
- Author
-
Eaves, B. Curtis and Veinott Jr., Arthur F.
- Subjects
MARKOV chain Monte Carlo ,VALUE engineering ,NETWORK effect ,POLYNOMIALS ,ALGORITHMS - Abstract
This paper considers infinite-horizon finite state-and-action Markov population decision chains (MPDCs) in which the goal is to find a stationary stopping policy with maximum stopping value, that is, with maximum value over all deterministic Markov stopping policies. A policy is stopping if the resulting expected population size in a period diminishes to zero as the period converges to infinity. The paper shows that the following are equivalent: (a) there is a stationary maximum-stopping-value policy; (b) the maximum stopping value is finite; (c) there is a stopping policy and an excessive point of the optimal return operator; and (d) the maximum stopping value is the least excessive (resp., fixed) point of the optimal return operator. The paper shows how to use linear programming, policy improvement, and successive approximations to solve the problem. The problem arises in stopping a Markov chain, as in optimally eradicating a pest or disease. The problem is one of two key subproblems used repeatedly to find Blackwell and Cesàro-overtaking optimal policies in finite Markov decision chains (MDCs), both of which, with their generalizations, have a vast number of applications. The problem for MDCs and/or MPDCs has been studied under various conditions, and over the years, by many investigators including Bellman, Bertsekas, Blackwell, Dantzig, Denardo, d’Epenoux, Derman, Dynkin, Eaton, Erickson, Ford, Hordijk, Howard, Kallenberg, Manne, O’Sullivan, Rothblum, Shoemaker, Strauch, Tsitsiklis, Veinott, Wagner, and Zadeh. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
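The successive-approximations approach mentioned above can be sketched on a tiny stopping problem: iterating the optimal return operator from V = 0 converges upward to its least fixed point, the maximum stopping value. The two-state example and all names below are mine:

```python
def max_stopping_value(P, r, tol=1e-10, max_iter=10000):
    """Successive approximations V <- T V for the optimal return operator
    (T V)(s) = max(r(s), sum_t P[s][t] * V(t)), starting from V = 0.
    When the continuation matrix P is substochastic (so a stopping policy
    exists), the iterates converge upward to the least fixed point of T."""
    n = len(r)
    V = [0.0] * n
    for _ in range(max_iter):
        V_new = [max(r[s], sum(P[s][t] * V[t] for t in range(n)))
                 for s in range(n)]
        if max(abs(V_new[s] - V[s]) for s in range(n)) < tol:
            return V_new
        V = V_new
    return V

# toy chain (my example): stopping now in state 0 pays 1; waiting gives a
# half chance of reaching state 1, where stopping pays 4, so V(0) = 2
V = max_stopping_value([[0.0, 0.5], [0.0, 0.0]], [1.0, 4.0])
```

This mirrors item (d) of the abstract: the computed V is the least excessive (fixed) point of the operator, and the maximizing action in each state gives a stationary maximum-stopping-value policy.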
16. Approximation Algorithms for D-optimal Design.
- Author
-
Singh, Mohit and Xie, Weijun
- Subjects
APPROXIMATION algorithms ,RANDOM noise theory ,ALGORITHMS ,LENGTH measurement ,EXPERIMENTAL design - Abstract
Experimental design is a classical statistics problem, and its aim is to estimate an unknown vector from linear measurements where a Gaussian noise is introduced in each measurement. For the combinatorial experimental design problem, the goal is to pick a subset of experiments so as to make the most accurate estimate of the unknown parameters. In this paper, we will study one of the most robust measures of error estimation—the D-optimality criterion, which corresponds to minimizing the volume of the confidence ellipsoid for the estimation error. The problem gives rise to two natural variants depending on whether repetitions of experiments are allowed or not. We first propose an approximation algorithm with a 1/e-approximation for the D-optimal design problem with and without repetitions, giving the first constant-factor approximation for the problem. We then analyze another sampling approximation algorithm and prove that it is asymptotically optimal. Finally, for D-optimal design with repetitions, we study a different algorithm proposed by the literature and show that it can improve this asymptotic approximation ratio. All the sampling algorithms studied in this paper are shown to admit polynomial-time deterministic implementations. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
17. Computing B-Stationary Points of Nonsmooth DC Programs.
- Author
-
Pang, Jong-Shi, Razaviyayn, Meisam, and Alvarado, Alberth
- Subjects
NONSMOOTH optimization ,STATIONARY processes ,DIGITAL communications ,ALGORITHMS ,STOCHASTIC convergence - Abstract
Motivated by a class of applied problems arising from physical layer based security in a digital communication system, in particular, by a secrecy sum-rate maximization problem, this paper studies a nonsmooth, difference-of-convex (dc) minimization problem. The contributions of this paper are (i) to clarify several kinds of stationary solutions and their relations; (ii) to develop and establish the convergence of a novel algorithm for computing a d-stationary solution of a problem with a convex feasible set, which is arguably the sharpest kind among the various stationary solutions; and (iii) to extend the algorithm in several directions, including a randomized choice of the subproblems that could help the practical convergence of the algorithm, a distributed penalty approach for problems whose objective functions are sums of dc functions, and problems with a specially structured (nonconvex) dc constraint. For the latter class of problems, a pointwise Slater constraint qualification is introduced that facilitates the verification and computation of a B(ouligand)-stationary point. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
18. Constant Approximation Algorithm for Nonuniform Capacitated Multi-Item Lot Sizing via Strong Covering Inequalities.
- Author
-
Li, Shi
- Subjects
APPROXIMATION algorithms ,INVENTORY costs ,LINEAR programming ,ALGORITHMS ,MATHEMATICAL equivalence - Abstract
We study the nonuniform capacitated multi-item lot-sizing problem. In this problem, there is a set of demands over a planning horizon of T discrete time periods, and all demands must be satisfied on time. We can place an order at the beginning of each period s, incurring an ordering cost K_s. In this order, we can order up to C_s units of products. On the other hand, carrying inventory from time to time incurs an inventory holding cost. The goal of the problem is to find a feasible solution that minimizes the sum of ordering and holding costs. Levi et al. [Levi R, Lodi A, Sviridenko M (2008) Approximation algorithms for the capacitated multi-item lot-sizing problem via flow-cover inequalities. Math. Oper. Res. 33(2):461–474.] gave a two-approximation for the problem when the capacities C_s are the same. Extending the result to the case of nonuniform capacities requires new techniques, as pointed out in the discussion section of their paper. In this paper, we solve the problem by giving a 10-approximation algorithm for the capacitated multi-item lot-sizing problem with general capacities. The constant approximation is achieved by adding an exponential number of new covering inequalities to the natural facility location–type linear programming (LP) relaxation for the problem. Along the way of our algorithm, we reduce the lot-sizing problem to two generalizations of the classic knapsack-covering problem. We give LP-based constant approximation algorithms for both generalizations via the iterative rounding technique. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
19. On the Undecidability of Computing Stationary Distributions and Large Deviation Rates for Constrained Random Walks.
- Author
-
Gamarnik, David
- Subjects
RANDOM walks ,STOCHASTIC processes ,NUMERICAL solutions for Markov processes ,LYAPUNOV functions ,ALGORITHMS ,DEVIATION (Statistics) - Abstract
We consider a constrained homogeneous random walk in ℤ_+^d. Such random walks are used to model various stochastic processes, most importantly multiclass Markovian queueing networks operating under state-dependent scheduling policies. These applications motivate the need to compute various performance measures, including stationary probability distributions and large deviations rates. In this paper, we show that computing the stationary probability distributions exactly is an algorithmically undecidable problem. That is, no algorithm can possibly exist to solve this task. The problem remains undecidable even if a linear Lyapunov function that verifies positive recurrence of the underlying random walk is available as a part of the input. We then prove that computing large deviation rates for this model is also an undecidable problem. Specifically, we show that it is an undecidable problem to determine finiteness of large deviations rates. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
20. A Semidefinite Relaxation Method for Partially Symmetric Tensor Decomposition.
- Author
-
Ni, Guyan and Li, Ying
- Subjects
HOMOGENEOUS polynomials ,ALGORITHMS - Abstract
In this paper, we establish an equivalence relation between partially symmetric tensors and homogeneous polynomials, prove that every partially symmetric tensor has a partially symmetric canonical polyadic (CP)-decomposition, and present three semidefinite relaxation algorithms. The first algorithm is used to check whether there exists a positive partially symmetric real CP-decomposition for a partially symmetric real tensor and to produce such a decomposition when one exists. The second algorithm is used to compute general partially symmetric real CP-decompositions. The third algorithm is used to compute positive partially symmetric complex CP-decompositions of partially symmetric complex tensors. Because, for different parameters s, m_i, n_i, partially symmetric tensors T ∈ S^[m]F^[n] represent different kinds of tensors, the proposed algorithms can be used to compute different types of real/complex tensor CP-decomposition, including general nonsymmetric CP-decomposition, positive symmetric CP-decomposition, positive partially symmetric CP-decomposition, general partially symmetric CP-decomposition, etc. Numerical examples show that the algorithms are effective. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
21. PRIMAL-DUAL AFFINE-SCALING ALGORITHMS FAIL FOR SEMIDEFINITE PROGRAMMING.
- Author
-
Muramatsu, Masakazu and Vanderbei, Robert J.
- Subjects
ALGORITHMS ,MATHEMATICAL programming - Abstract
In this paper, we give an example of a semidefinite programming problem in which primal-dual affine-scaling algorithms using the HRVW/KSH/M, MT, and AHO directions fail. We prove that each of these algorithms can generate a sequence converging to a non-optimal solution and that, for the AHO direction, even its associated continuous trajectory can converge to a non-optimal point. In contrast with these directions, we show that the primal-dual affine-scaling algorithm using the NT direction for the same semidefinite programming problem always generates a sequence converging to the optimal solution. Both primal and dual problems have interior feasible solutions and unique optimal solutions which satisfy strict complementarity, and are nondegenerate everywhere. [ABSTRACT FROM AUTHOR]
- Published
- 1999
- Full Text
- View/download PDF
22. STABLE EXTENSIVE GAME FORMS.
- Author
-
Ichiishi, Tatsuro
- Subjects
GAME theory ,MATHEMATICAL models ,CONVEX programming ,MATHEMATICAL programming ,ALGORITHMS ,CONVEX geometry ,GEOMETRY ,CONVEX domains ,GEOMETRIC tomography - Abstract
Peleg established a fundamental theorem which says that convex effectivity functions are stable. He also proved its partial converse: Within the class of maximal effectivity functions, stability implies convexity. The present paper provides another partial converse: Within the class of α-effectivity functions derived from extensive game forms (without chance moves), stability implies convexity. It also provides a geometric characterization of the stability condition for the latter class. [ABSTRACT FROM AUTHOR]
- Published
- 1987
- Full Text
- View/download PDF
23. RANDOM BINARY SEARCH: A RANDOMIZING ALGORITHM FOR GLOBAL OPTIMIZATION IN R.
- Author
-
Zemel, Eitan
- Subjects
ALGORITHMS ,MATHEMATICAL optimization ,STOCHASTIC processes ,ITERATIVE methods (Mathematics) ,DISTRIBUTION (Probability theory) ,PROBABILITY theory ,STATISTICAL sampling ,MATHEMATICS ,OPERATIONS research - Abstract
Randomizing (stochastic) global optimization algorithms can be viewed as sampling procedures, where at each iteration a local optimum is sampled from the set of local optima. The effectiveness of such an algorithm depends crucially on the sampling distribution, i.e., on the probabilities of sampling the various local optima. There exists a very large body of research on stochastic global optimization. However, very little is known about the sampling distribution itself and, in particular, on the specific way in which it depends on the function being optimized, on the region in which the search takes place, and on the optimization routine. In this paper we set out to examine these issues in some detail and present some first results in this direction. The case analyzed in this paper is the optimization of a one-dimensional function using a randomizing version of binary search. We give an effective (computable in quadratic time) formula for the sampling distribution and obtain various interesting properties of this distribution which are relevant to the behavior of the algorithm in practice. We also discuss some issues related to adaptive search. [ABSTRACT FROM AUTHOR]
- Published
- 1986
- Full Text
- View/download PDF
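A hedged sketch of the randomized binary-search idea above (my simplified version, not Zemel's exact procedure): split the current interval at a uniformly random interior point and descend toward smaller function values; on a multimodal function, independent runs sample different local optima, which is exactly the sampling distribution the abstract studies.

```python
import random

def random_binary_search(f, lo, hi, iters=60, rng=random):
    """Randomizing binary search (a simplified sketch): split [lo, hi] at
    a uniformly random interior point and keep the half toward which f
    decreases; returns an approximate local minimizer of f."""
    for _ in range(iters):
        m = lo + (hi - lo) * rng.uniform(0.25, 0.75)
        if f(m - 1e-9) < f(m + 1e-9):  # f increasing at m: search left
            hi = m
        else:                          # f decreasing at m: search right
            lo = m
    return (lo + hi) / 2.0

# unimodal sanity check (my example): the minimizer of (t - 2)^2 is t = 2
x = random_binary_search(lambda t: (t - 2.0) ** 2, 0.0, 10.0)
```

Because the split point is random, which local optimum a run converges to on a multimodal f is itself random; the paper's contribution is a quadratic-time formula for that sampling distribution.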
24. A FIXED POINT APPROACH TO CERTAIN CONVEX PROGRAMS WITH APPLICATIONS IN STOCHASTIC PROGRAMMING.
- Author
-
Fukushima, Masao
- Subjects
CONVEX programming ,STOCHASTIC processes ,ALGORITHMS - Abstract
This paper characterizes, as a fixed point problem, a certain convex program that often arises in connection with stochastic programming, and explores the possibility of solving it by applying simplicial algorithms for finding fixed points of point-to-set mappings. We first establish equivalence between the convex program and the fixed point problem for some point-to-set mapping. As this mapping may have unfavorable properties from a computational viewpoint, we then modify it to reconstruct a point-to-set mapping and consider the application of a simplicial fixed point algorithm to the latter mapping. It is shown that the algorithm converges to a fixed point of the mapping and yields an optimal solution to the original convex program under certain conditions. [ABSTRACT FROM AUTHOR]
- Published
- 1983
- Full Text
- View/download PDF
25. A COLUMN GENERATION TECHNIQUE FOR THE COMPUTATION OF STATIONARY POINTS.
- Author
-
Jong-Shi Pang
- Subjects
ALGORITHMS ,POLYHEDRAL functions ,EQUATIONS - Abstract
In two recent papers, Eaves showed that Lemke's algorithm can be used to compute a stationary point of an affine function over a polyhedral set. This paper proposes an alternative method which is based on parametric principal pivoting. The proposed method involves solving systems of linear equations and parametric linear subprograms over the given polyhedral set. An obvious advantage of the method is that any special structure of the polyhedral set can be exploited profitably in the solution of the subprograms. [ABSTRACT FROM AUTHOR]
- Published
- 1981
- Full Text
- View/download PDF
26. ON PATHS GENERATED BY FIXED POINT ALGORITHMS.
- Author
-
Saigal, R.
- Subjects
ALGORITHMS ,MATHEMATICAL mappings - Abstract
This paper considers algorithms that compute fixed points (or, more generally, solve equations) which are based on complementary pivoting. These algorithms, starting with a map ƒ[sup 0] and its fixed point x[sub 0], deform ƒ[sup t] to ƒ[sup ∞] = ƒ as t goes from 0 to ∞, and follow the path x[sub t] of fixed points of ƒ[sup t]. In this paper we study these paths. In particular, we treat some special applications of these algorithms from a global point of view, and thus look at the behavior and properties of these paths sufficiently far from the solution. Our methods are motivated by methods of global analysis. We show that in many implementations of these algorithms the path can be specified. These include the cases when the mapping is linear and when it is smooth and monotone. The results of the linear analysis are used to study the local behavior of these paths (i.e., properties and behavior sufficiently close to the solution) for smooth mappings. An important implication of this study is that the paths can be modified and thus the work done by these algorithms can be controlled. We also show, for smooth mappings, how our results can be implemented to increase the efficiency of some standard algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 1976
- Full Text
- View/download PDF
27. Extrapolated Proximal Subgradient Algorithms for Nonconvex and Nonsmooth Fractional Programs.
- Author
-
Boţ, Radu Ioan, Dao, Minh N., and Li, Guoyin
- Subjects
SIGNAL reconstruction ,RAYLEIGH quotient ,CONVEX functions ,SIGNAL processing ,ALGORITHMS - Abstract
In this paper, we consider a broad class of nonsmooth and nonconvex fractional programs, which encompass many important modern optimization problems arising from diverse areas such as the recently proposed scale-invariant sparse signal reconstruction problem in signal processing. We propose a proximal subgradient algorithm with extrapolations for solving this optimization model and show that the iterated sequence generated by the algorithm is bounded and that any one of its limit points is a stationary point of the model problem. The choice of our extrapolation parameter is flexible and includes the popular extrapolation parameter adopted in the restarted fast iterative shrinking-threshold algorithm (FISTA). By providing a unified analysis framework of descent methods, we establish the convergence of the full sequence under the assumption that a suitable merit function satisfies the Kurdyka–Łojasiewicz property. Our algorithm exhibits linear convergence for the scale-invariant sparse signal reconstruction problem and the Rayleigh quotient problem over spherical constraint. When the denominator is the maximum of finitely many continuously differentiable weakly convex functions, we also propose another extrapolated proximal subgradient algorithm with guaranteed convergence to a stronger notion of stationary points of the model problem. Finally, we illustrate the proposed methods by both analytical and simulated numerical examples. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
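The extrapolation mechanics the abstract refers to (the restarted-FISTA-style parameter) can be sketched on a convex stand-in problem. This is not the paper's nonconvex fractional model, just the familiar proximal step plus extrapolation on a lasso objective, to make the update concrete.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def extrapolated_prox_grad(A, b, lam, iters=500):
    """Proximal gradient with the FISTA-style extrapolation parameter on
    min 0.5 * ||Ax - b||^2 + lam * ||x||_1 (convex stand-in)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # extrapolation step
        x, t = x_new, t_new
    return x
```

The momentum coefficient (t − 1)/t_new is exactly the popular choice the abstract says its flexible parameter class includes.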
28. Probability Bounds for Polynomial Functions in Random Variables.
- Author
-
Simai He, Bo Jiang, Zhening Li, and Shuzhong Zhang
- Subjects
POLYNOMIALS ,ALGORITHMS ,MULTIVARIATE analysis ,TENSOR algebra ,BERNOULLI polynomials - Abstract
Random sampling is a simple but powerful method in statistics and in the design of randomized algorithms. In a typical application, random sampling can be applied to estimate an extreme value, say the maximum, of a function f over a set S ⊆ ℝⁿ. To do so, one may select a simpler (even finite) subset S₀ ⊆ S, randomly take some samples over S₀ a number of times, and pick the best sample. The hope is to find a good approximate solution with reasonable chance. This paper sets out to present a number of scenarios for f, S, and S₀ where certain probability bounds can be established, leading to a quality assurance of the procedure. In our setting, f is a multivariate polynomial function. We prove that if f is a d-th order homogeneous polynomial in n variables with F its corresponding super-symmetric tensor, and ξᵢ (i = 1, 2, …, n) are i.i.d. Bernoulli random variables taking 1 or −1 with equal probability, then Prob{f(ξ₁, ξ₂, …, ξₙ) ≥ τ n^(−d/2) ‖F‖₁} ≥ θ, where τ, θ > 0 are two universal constants and ‖F‖₁ denotes the summation of the absolute values of all entries of F. Several new inequalities concerning probabilities of the above nature are presented in this paper. Moreover, we show that the bounds are tight in most cases. Applications of our results in optimization are discussed as well. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
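A toy version of the sampling procedure the abstract analyzes: draw i.i.d. ±1 vectors and keep the best sample of a polynomial f. The probability bound is what certifies the quality of such a best-of-N sample; the sketch below only implements the sampling itself, with an arbitrary example polynomial, not the bound.

```python
import random

def best_random_pm1_sample(f, n, trials=2000, seed=0):
    """Estimate the maximum of f over {-1, 1}^n by keeping the best of
    `trials` i.i.d. uniform +/-1 samples (the random-sampling procedure
    whose quality the abstract's probability bounds certify)."""
    rng = random.Random(seed)
    best_x, best_v = None, float("-inf")
    for _ in range(trials):
        x = [rng.choice((-1, 1)) for _ in range(n)]
        v = f(x)
        if v > best_v:
            best_x, best_v = x, v
    return best_x, best_v
```

For small n the best sample found this way hits the true maximum with overwhelming probability.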
29. Fluid Limits for Bandwidth-Sharing Networks with Rate Constraints.
- Author
-
Remerova, Maria, Reed, Josh, and Zwart, Bert
- Subjects
BANDWIDTHS ,POLYNOMIALS ,PARTIAL differential equations ,ALGORITHMS ,INTEGERS - Abstract
Bandwidth-sharing networks as introduced by Roberts and Massoulié [Roberts JW, Massoulié L (1998) Bandwidth sharing and admission control for elastic traffic. Proc. ITC Specialist Seminar, Yokohama, Japan], Massoulié and Roberts [Massoulié L, Roberts JW (1999) Bandwidth sharing: Objectives and algorithms. Proc. IEEE Infocom. (Books in Statistics, New York), 1395-1403] model the dynamic interaction among an evolving population of elastic flows competing for several links. With policies based on optimization procedures, such models are of interest both from a queueing theory and operations research perspective. In the present paper, we focus on bandwidth-sharing networks with capacities and arrival rates of a large order of magnitude compared to transfer rates of individual flows. This regime is standard in practice. In particular, we extend previous work by Reed and Zwart [Reed J, Zwart B (2010) Limit theorems for bandwidth-sharing networks with rate constraints. Revised, preprint http://people.stern.nyu.edu/jreed/Papers/BARevised.pdf] on fluid approximations for such networks: we allow interarrival times, flow sizes, and patience times (i.e., abandonment times measured from the arrival epochs) to be generally distributed, rather than exponentially distributed. We also develop polynomial-time computable fixed-point approximations for stationary distributions of bandwidth-sharing networks, and suggest new techniques for deriving these types of results. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
30. A New Approach to the Pareto Stable Matching Problem.
- Author
-
Kamiyama, Naoyuki
- Subjects
POLYNOMIALS ,ALGORITHMS ,MATRICES (Mathematics) ,EQUATIONS ,MATHEMATICAL models - Abstract
In two-sided matching markets, the concept of stability proposed by Gale and Shapley is one of the most important solution concepts. In this paper, we consider a problem related to stability of a matching in a two-sided matching market with indifferences. It is known that stability does not guarantee Pareto efficiency in a two-sided matching market with indifferences. However, Erdil and Ergin proved that there always exists a stable and Pareto efficient matching in a many-to-one matching market with indifferences and gave a polynomial-time algorithm for finding it. Later on, Chen proved that there always exists a stable and Pareto efficient matching in a many-to-many matching market with indifferences and gave a polynomial-time algorithm for finding it. In this paper, we propose a new approach to the problem of finding a stable and Pareto efficient matching in a many-to-many matching market with indifferences. Our algorithm is an alternative proof of the existence of a stable and Pareto efficient matching in a many-to-many matching market with indifferences. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
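For background, the classical deferred-acceptance (Gale-Shapley) algorithm for one-to-one markets with strict preferences can be sketched as follows. The paper itself treats the harder setting of indifferences and many-to-many markets, which this baseline sketch does not handle.

```python
def gale_shapley(men_pref, women_pref):
    """Classical deferred acceptance for a one-to-one market with strict
    preferences (the baseline stability notion the abstract builds on).
    men_pref[m] and women_pref[w] are preference lists, best first.
    Returns the man-optimal stable matching as a dict man -> woman."""
    n = len(men_pref)
    rank = [[0] * n for _ in range(n)]     # rank[w][m] = w's rank of man m
    for w, prefs in enumerate(women_pref):
        for r, m in enumerate(prefs):
            rank[w][m] = r
    next_prop = [0] * n                    # next index each man proposes to
    fiance = [None] * n                    # fiance[w] = w's current partner
    free = list(range(n))
    while free:
        m = free.pop()
        w = men_pref[m][next_prop[m]]
        next_prop[m] += 1
        if fiance[w] is None:
            fiance[w] = m
        elif rank[w][m] < rank[w][fiance[w]]:
            free.append(fiance[w])         # w trades up; old partner is free again
            fiance[w] = m
        else:
            free.append(m)                 # w rejects m
    return {fiance[w]: w for w in range(n)}
```

With indifferences (ties), the matching this produces is stable but need not be Pareto efficient, which is exactly the gap the paper's algorithm closes.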
31. The Power of Preemption on Unrelated Machines and Applications to Scheduling Orders.
- Author
-
Correa, José R., Skutella, Martin, and Verschae, José
- Subjects
PRODUCTION scheduling ,SCHEDULING ,INDUSTRIAL equipment ,MACHINERY ,ALGORITHMS ,APPROXIMATION theory - Abstract
Scheduling jobs on unrelated parallel machines so as to minimize makespan is one of the basic problems in the area of machine scheduling. In the first part of the paper, we prove that the power of preemption, i.e., the worst-case ratio between the makespan of an optimal nonpreemptive and that of an optimal preemptive schedule, is at least 4. This matches the upper bound proposed in Lin and Vitter [Lin, J.-H., J. S. Vitter. 1992. ε-approximations with minimum packing constraint violation. Proc. 24th Annual ACM Sympos. Theory of Comput. (STOC), ACM, New York, 771-782] two decades ago. In the second part of the paper, we consider the more general setting in which orders, consisting of several jobs, have to be processed on unrelated parallel machines so as to minimize the sum of weighted completion times of the orders. We obtain the first constant factor approximation algorithms for the preemptive and nonpreemptive cases, improving and extending a recent result by Leung et al. [Leung, J., H. Li, M. Pinedo, J. Zhang. 2007. Minimizing total weighted completion time when scheduling orders in a flexible environment with uniform machines. Inform. Processing Lett. 103 119-129]. Finally, we study this problem in a parallel machine environment, obtaining a polynomial-time approximation scheme for several special cases. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
32. Splitting Randomized Stationary Policies in Total-Reward Markov Decision Processes.
- Author
-
Feinberg, Eugene A. and Rothblum, Uriel G.
- Subjects
MARKOV processes ,STOCHASTIC processes ,ALGORITHMS ,MARKOV operators ,HAMILTONIAN systems - Abstract
This paper studies a discrete-time total-reward Markov decision process (MDP) with a given initial state distribution. A (randomized) stationary policy can be split on a given set of states if the occupancy measure of this policy can be expressed as a convex combination of the occupancy measures of stationary policies, each selecting deterministic actions on the given set and coinciding with the original stationary policy outside of this set. For a stationary policy, necessary and sufficient conditions are provided for splitting it at a single state as well as sufficient conditions for splitting it on the whole state space. These results are applied to constrained MDPs. The results are refined for absorbing (including discounted) MDPs with finite state and actions spaces. In particular, this paper provides an efficient algorithm that presents the occupancy measure of a given policy as a convex combination of the occupancy measures of finitely many (stationary) deterministic policies. This algorithm generates the splitting policies in a way that each pair of consecutive policies differs at exactly one state. The results are applied to constrained problems to efficiently compute an optimal policy by computing and splitting a stationary optimal policy. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
33. On the Approximability of Single-Machine Scheduling with Precedence Constraints.
- Author
-
Ambühl, Christoph, Mastrolilli, Monaldo, Mutsanas, Nikolaus, and Svensson, Ola
- Subjects
PRODUCTION scheduling ,APPROXIMATION algorithms ,HEURISTIC algorithms ,SCHEDULING ,ALGORITHMS - Abstract
We consider the single-machine scheduling problem to minimize the weighted sum of completion times under precedence constraints. In a series of recent papers, it was established that this scheduling problem is a special case of minimum weighted vertex cover. In this paper, we show that the vertex cover graph associated with the scheduling problem is exactly the graph of incomparable pairs defined in the dimension theory of partial orders. Exploiting this relationship allows us to present a framework for obtaining (2 − 2/f)-approximation algorithms, provided that the set of precedence constraints has fractional dimension of at most f. Our approach yields the best-known approximation ratios for all previously considered special classes of precedence constraints, and it provides the first results for bounded degree and orders of interval dimension 2. On the negative side, we show that the addressed problem remains NP-hard even when restricted to the special case of interval orders. Furthermore, we prove that the general problem, if a fixed cost present in all feasible schedules is ignored, becomes as hard to approximate as vertex cover. We conclude by giving the first inapproximability result for this problem, showing under a widely believed assumption that it does not admit a polynomial-time approximation scheme. [ABSTRACT FROM AUTHOR]
- Published
- 2011
34. A Plant Location Guide for the Unsure: Approximation Algorithms for Min-Max Location Problems.
- Author
-
Anthony, Barbara, Goyal, Vineet, Gupta, Anupam, and Nagarajan, Viswanath
- Subjects
ALGORITHMS ,LOCATION problems (Programming) ,APPROXIMATION theory ,STOCHASTIC approximation ,ROBUST optimization ,METRIC spaces - Abstract
This paper studies an extension of the k-median problem under uncertain demand. We are given an n-vertex metric space (V, d) and m client sets S₁, …, Sₘ ⊆ V. The goal is to open a set F of k facilities such that the worst-case connection cost over all the client sets is minimized, i.e., min_{F ⊆ V, |F| = k} max_{i ∈ [m]} Σ_{j ∈ Sᵢ} d(j, F), where for any F ⊆ V, d(j, F) = min_{f ∈ F} d(j, f). This is a "min-max" or "robust" version of the k-median problem. Note that in contrast to the recent papers on robust and stochastic problems, we have only one stage of decision-making where we select a set of k facilities to open. Once a set of open facilities is fixed, each client in the uncertain client-set connects to the closest open facility. We present a simple, combinatorial O(log n + log m)-approximation algorithm for the robust k-median problem that is based on reweighting/Lagrangean-relaxation ideas. In fact, we give a general framework for (minimization) k-facility location problems where there is a bound on the number of open facilities. We show that if the location problem satisfies a certain "projection" property, then both the robust and stochastic versions of the location problem admit approximation algorithms with logarithmic ratios. We use our framework to give the first approximation algorithms for robust and stochastic versions of several location problems such as k-tree, capacitated k-median, and fault-tolerant k-median. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
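The min-max objective in the abstract can be pinned down by evaluating it exactly on tiny instances. This exponential-time search is only a definition check to fix notation, not the paper's O(log n + log m)-approximation algorithm.

```python
import itertools

def robust_k_median_brute(dist, client_sets, k):
    """Exact (exponential-time) evaluation of the min-max k-median
    objective: minimize over all k-subsets F of vertices the worst-case
    total connection cost over the given client sets.  dist is a full
    distance matrix; feasible only for very small instances."""
    V = range(len(dist))

    def cost(F):
        # worst client set, where each client connects to its closest open facility
        return max(sum(min(dist[j][f] for f in F) for j in S)
                   for S in client_sets)

    return min((cost(F), F) for F in itertools.combinations(V, k))
```

On three points at positions 0, 1, 5 on a line with client sets {0, 1} and {2}, the single best facility is the middle point.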
35. Approximation Algorithms for Combinatorial Auctions with Complement-Free Bidders.
- Author
-
Dobzinski, Shahar, Nisan, Noam, and Schapira, Michael
- Subjects
APPROXIMATION theory ,ALGORITHMS ,BIDDERS ,AUCTIONS ,PUBLIC welfare - Abstract
In a combinatorial auction m heterogenous indivisible items are sold to n bidders. This paper considers settings in which the valuation functions of the bidders are known to be complement free (a.k.a. subadditive). We provide several approximation algorithms for the social-welfare maximization problem in such settings. First, we present a logarithmic upper bound for the case that the access to the valuation functions is via demand queries. For the weaker value queries model we provide a tight O(√ m) approximation. Unlike the other algorithms we present, this algorithm is also incentive compatible. Finally, we present two approximation algorithms for the more restricted class of XOS valuations: A simple deterministic algorithm that provides an approximation ratio of two and an optimal e/(e-1) approximation achieved via randomized rounding. We also present optimal lower bounds for both the demand oracles model and the value oracles model. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
36. A 2-Approximation Algorithm for Stochastic Inventory Control Models with Lost Sales.
- Author
-
Levi, Retsef, Janakiraman, Ganesh, and Nagarajan, Mahesh
- Subjects
INVENTORIES ,STOCHASTIC analysis ,DIRECT costing ,APPROXIMATION theory ,ALGORITHMS ,SALES - Abstract
In this paper, we describe the first computationally efficient policies for stochastic inventory models with lost sales and replenishment lead times that admit worst-case performance guarantees. In particular, we introduce dual-balancing policies for lost-sales models that are conceptually similar to dual balancing policies recently introduced for a broad class of inventory models in which demand is backlogged rather than lost. That is, in each period, we balance two opposing costs: the expected marginal holding costs against the expected marginal lost-sales cost. Specifically. we show that the dual-balancing policies for the lost-sales models provide a worst-case performance guarantee of two under relatively general demand structures. In particular, the guarantee holds for independent (not necessarily identically distributed) demands and for models with correlated demands such as the AR(1) model and the multiplicative autoregressive demand model. The policies and the worst-case guarantee extend to models with capacity constraints on the size of the order and stochastic lead times. Our analysis has several novel elements beyond the balancing ideas for backorder models. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
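A single-period caricature of the balancing idea, assuming a sampled demand distribution with per-unit holding cost h and lost-sales penalty p: pick the order quantity at which expected overage cost equals expected underage cost. The paper's actual dual-balancing policy balances expected *marginal* costs period by period in a multi-period model; this sketch only shows the balancing principle.

```python
def balancing_order(demand_samples, h, p):
    """Order quantity q at which expected overage cost h*E[(q-D)+]
    equals expected underage (lost-sales) cost p*E[(D-q)+], found by
    bisection; over(q) - under(q) is nondecreasing in q."""
    n = len(demand_samples)

    def over(q):
        return h * sum(max(q - d, 0.0) for d in demand_samples) / n

    def under(q):
        return p * sum(max(d - q, 0.0) for d in demand_samples) / n

    lo, hi = 0.0, float(max(demand_samples))
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if over(mid) < under(mid):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

With symmetric costs and demand samples {0, 10}, the two expected costs cross at q = 5.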
37. Approximation Algorithms for Stochastic Inventory Control Models.
- Author
-
Levi, Retsef, Pál, Martin, Roundy, Robin O., and Shmoys, David B.
- Subjects
STOCHASTIC systems ,COMMODITY options ,INVENTORY control ,SUPPLY chain management ,COST accounting ,INVENTORY accounting ,ALGORITHMS - Abstract
We consider two classical stochastic inventory control models, the periodic-review stochastic inventory control problem and the stochastic lot-sizing problem. The goal is to coordinate a sequence of orders of a single commodity, aiming to supply stochastic demands over a discrete, finite horizon with minimum expected overall ordering, holding, and backlogging costs. In this paper, we address the important problem of finding computationally efficient and provably good inventory control policies for these models in the presence of correlated, nonstationary (time-dependent), and evolving stochastic demands. This problem arises in many domains and has many practical applications in supply chain management. Our approach is based on a new marginal cost accounting scheme for stochastic inventory control models combined with novel cost-balancing techniques. Specifically, in each period, we balance the expected cost of overordering (i.e., costs incurred by excess inventory) against the expected cost of underordering (i.e., costs incurred by not satisfying demand on time). This leads to what we believe to be the first computationally efficient policies with constant worst-case performance guarantees for a general class of important stochastic inventory control models. That is, there exists a constant C such that, for any instance of the problem, the expected cost of the policy is at most C times the expected cost of an optimal policy. In particular, we provide a worst-case guarantee of two for the periodic-review stochastic inventory control problem and a worst-case guarantee of three for the stochastic lot-sizing problem. Our results are valid for all of the currently known approaches in the literature to model correlation and nonstationarity of demands over time. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
38. Uniqueness and Stability of Optimal Policies of Finite State Markov Decision Processes.
- Author
-
Leizarowitz, Arie and Zaslavski, Alexander J.
- Subjects
DISCRETE-time systems ,MARKOV processes ,EQUATIONS ,ALGORITHMS ,MULTIDISCIPLINARY practices - Abstract
In this paper we consider infinite horizon discrete-time optimal control of Markov decision processes (MDPs) with finite state spaces and compact action sets. We restrict attention to unicost MDPs, which form a class that contains all the weakly communicating MDPs. The unicost MDPs are characterized as those MDPs for which there exists a solution to the single optimality equation. We address the problem of uniqueness and stability of minimizing Markov actions. Our main result asserts that when we endow the set of unicost MDPs with a certain natural metric, under which it is complete, then the class of MDPs with essentially unique and stable minimizing Markov actions contains the intersection of countably many open dense sets (hence is itself dense). Thus, the property of having essentially unique and stable minimizing Markov actions is generic for unicost MDPs. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
39. Approximation Algorithms for the Job Interval Selection Problem and Related Scheduling Problems.
- Author
-
Chuzhoy, Julia, Ostrovsky, Rafail, and Rabani, Yuval
- Subjects
ALGORITHMS ,PRODUCTION scheduling ,MACHINERY ,MANUFACTURING processes - Abstract
In this paper we consider the job interval selection problem (JISP), a simple scheduling model with a rich history and numerous applications. Special cases of this problem include the so-called real-time scheduling problem (also known as the throughput maximization problem) in single- and multiple-machine environments. In these special cases we have to maximize the number of jobs scheduled between their release date and deadline (preemption is not allowed). Even the single-machine case is NP-hard. The unrelated machines case, as well as other special cases of JISP, are MAX SNP-hard. A simple greedy algorithm gives a two-approximation for JISP. Despite many efforts, this was the best approximation guarantee known, even for throughput maximization on a single machine. In this paper, we break this barrier and show an approximation guarantee of less than 1.582 for arbitrary instances of JISP. For some special cases, we show better results. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
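The simple greedy two-approximation mentioned in the abstract can be sketched for the single-machine case, assuming each candidate interval is given as a (job, start, end) triple; each job may appear with several candidate intervals, at most one of which may be selected.

```python
def greedy_jisp(intervals):
    """Greedy 2-approximation for single-machine JISP: scan candidate
    intervals (job, start, end) in order of end time, accepting one
    whenever its job is still unscheduled and it does not overlap any
    previously accepted interval."""
    chosen, used_jobs, last_end = [], set(), float("-inf")
    for job, s, e in sorted(intervals, key=lambda t: t[2]):
        if job not in used_jobs and s >= last_end:
            chosen.append((job, s, e))
            used_jobs.add(job)
            last_end = e            # accepted intervals are kept disjoint
    return chosen
```

Sorting by end time means the overlap check reduces to comparing each start against the last accepted end.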
40. Auction Algorithms for Market Equilibrium.
- Author
-
Garg, Rahul and Kapoor, Sanjiv
- Subjects
MARKET equilibrium ,ECONOMIC equilibrium ,MARKETS ,SUPPLY-side economics ,ALGORITHMS - Abstract
In this paper we study algorithms for computing market equilibrium in markets with linear utility functions. The buyers in the market have an initial endowment given by a portfolio of goods. The market equilibrium problem is to compute a price vector that ensures market clearing, i.e., the demand of a positively priced good equals its supply, and given the prices, each buyer maximizes its utility. The problem is of considerable interest in economics. This paper presents a formulation of the market equilibrium problem as a parameterized linear program. We construct a family of duals corresponding to these parameterized linear programs and show that finding the market equilibrium is the same as finding a linear program from this family of linear programs. The market-clearing conditions arise naturally from complementary slackness conditions. We then define an auction mechanism that computes prices such that approximate market clearing is achieved. The algorithm we obtain outperforms previously known methods. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
41. Continuous Time Discounted Jump Markov Decision Processes: A Discrete-Event Approach.
- Author
-
Feinberg, Eugene A.
- Subjects
MARKOV processes ,ALGORITHMS ,NONLINEAR systems ,OPERATIONS research ,SYSTEMS theory - Abstract
This paper introduces and develops a new approach to the theory of continuous time jump Markov decision processes (CTJMDP). This approach reduces discounted CTJMDPs to discounted semi-Markov decision processes (SMDPs) and eventually to discrete-time Markov decision processes (MDPs). The reduction is based on the equivalence of strategies that change actions between jumps and the randomized strategies that change actions only at jump epochs. This holds both for one-criterion problems and for multiple-objective problems with constraints. In particular, this paper introduces the theory for multiple-objective problems with expected total discounted rewards and constraints. If a problem is feasible, there exist three types of optimal policies: (i) nonrandomized switching stationary policies, (ii) randomized stationary policies for the CTJMDP, and (iii) randomized stationary policies for the corresponding SMDP with exponentially distributed sojourn times, and these policies can be implemented as randomized strategies in the CTJMDP. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
42. Tight Approximation for Unconstrained XOS Maximization.
- Author
-
Filmus, Yuval, Kawase, Yasushi, Kobayashi, Yusuke, and Yamaguchi, Yutaro
- Subjects
ADDITIVE functions ,ALGORITHMS ,SET functions ,APPROXIMATION algorithms ,COMBINATORICS - Abstract
A set function is called XOS if it can be represented by the maximum of additive functions. When such a representation is fixed, the number of additive functions required to define the XOS function is called the width. In this paper, we study the problem of maximizing XOS functions in the value oracle model. The problem is trivial for XOS functions of width 1 because they are just additive, but it is already nontrivial even when the width is restricted to 2. We show two types of tight bounds on the polynomial-time approximability for this problem. First, in general, the approximation bound is between O(n) and Ω(n/log n), and exactly Θ(n/log n) if randomization is allowed, where n is the ground set size. Second, when the width of the input XOS functions is bounded by a constant k ≥ 2, the approximation bound is between k − 1 and k − 1 − ɛ for any ɛ > 0. In particular, we give a linear-time algorithm to find an exact maximizer of a given XOS function of width 2, whereas we show that any exact algorithm requires an exponential number of value oracle calls even when the width is restricted to 3. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
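If the additive representation of an XOS function is given explicitly (unlike the value-oracle setting of the abstract, where hiding the representation is the whole difficulty), an exact maximizer is immediate: each additive part is maximized by its strictly positive items, and the best part wins. The sketch below shows only this explicit-representation baseline.

```python
def maximize_xos_explicit(additive_funcs):
    """Exact maximizer of an XOS function f(S) = max_a sum_{i in S} a_i,
    given the additive parts explicitly as lists of item weights.
    Since max_S max_a a(S) = max_a max_S a(S), it suffices to maximize
    each additive part separately by taking its positive-weight items."""
    best_set, best_val = frozenset(), 0.0   # the empty set always has value 0
    for a in additive_funcs:
        S = frozenset(i for i, w in enumerate(a) if w > 0)
        v = sum(w for w in a if w > 0)
        if v > best_val:
            best_set, best_val = S, v
    return best_set, best_val
```

In the value oracle model this shortcut is unavailable, which is why even width 2 requires the paper's dedicated linear-time algorithm.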
43. Proximal Gradient Methods with Adaptive Subspace Sampling.
- Author
-
Grishchenko, Dmitry, Iutzeler, Franck, and Malick, Jérôme
- Subjects
NONSMOOTH optimization ,ALGORITHMS ,LEARNING problems ,SIGNAL processing ,MACHINE learning - Abstract
Many applications in machine learning or signal processing involve nonsmooth optimization problems. This nonsmoothness brings a low-dimensional structure to the optimal solutions. In this paper, we propose a randomized proximal gradient method harnessing this underlying structure. We introduce two key components: (i) a random subspace proximal gradient algorithm; and (ii) an identification-based sampling of the subspaces. Their interplay brings a significant performance improvement on typical learning problems in terms of dimensions explored. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
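A minimal instance of the random-subspace idea, sketched on a lasso objective: each iteration takes a proximal gradient step restricted to a random coordinate subset. The paper's key ingredient, the identification-based adaptive sampling that concentrates on the detected support, is deliberately omitted from this sketch.

```python
import numpy as np

def rand_subspace_prox_grad(A, b, lam, p=0.5, iters=3000, seed=0):
    """Random-subspace proximal gradient on
    min 0.5 * ||Ax - b||^2 + lam * ||x||_1: each iteration updates only a
    random subset of coordinates (each kept with probability p) via a
    proximal gradient (soft-thresholding) step."""
    rng = np.random.default_rng(seed)
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        mask = rng.random(A.shape[1]) < p    # random coordinate subspace
        if not mask.any():
            continue
        g = A.T @ (A @ x - b)
        step = x[mask] - g[mask] / L
        x[mask] = np.sign(step) * np.maximum(np.abs(step) - lam / L, 0.0)
    return x
```

Because the nonsmooth term is separable, restricting the prox step to a coordinate subspace is exact on that subspace.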
44. A SUPERLINEARLY CONVERGENT ALGORITHM FOR THE MONOTONE NONLINEAR COMPLEMENTARITY PROBLEM WITHOUT UNIQUENESS AND NONDEGENERACY CONDITIONS.
- Author
-
Dan, Hiroshige, Yamashita, Nobuo, and Fukushima, Masao
- Subjects
ALGORITHMS ,NONLINEAR theories ,STOCHASTIC convergence ,LINEAR complementarity problem ,OPERATOR theory ,MATHEMATICAL sequences ,NUMERICAL solutions to equations ,PROBLEM solving - Abstract
The purpose of this paper is to present an algorithm for solving the monotone nonlinear complementarity problem (NCP) that enjoys superlinear convergence in a genuine sense without the uniqueness and nondegeneracy conditions. Recently, Yamashita and Fukushima (2001) proposed a method based on the proximal point algorithm (PPA) for monotone NCP. The method has the favorable property that a generated sequence converges to the solution set of NCP superlinearly. However, when a generated sequence converges to a degenerate solution, the subproblems may become computationally expensive and the method does not have genuine superlinear convergence. More recently, Yamashita et al. (2001) presented a technique to identify whether a solution is degenerate or not. Using this technique, we construct a differentiable system of nonlinear equations in which the solution is a solution of the original NCP. Moreover, we propose a hybrid algorithm that is based on the PPA and uses this system. We show that the proposed algorithm has a genuine quadratic or superlinear rate of convergence even if it converges to a solution that is neither unique nor nondegenerate. [ABSTRACT FROM AUTHOR]
- Published
- 2002
- Full Text
- View/download PDF
45. THE COMPLEXITY OF GENERIC PRIMAL ALGORITHMS FOR SOLVING GENERAL INTEGER PROGRAMS.
- Author
-
Schulz, Andreas S. and Weismantel, Robert
- Subjects
COMBINATORIAL optimization ,INTEGER programming ,ALGORITHMS ,MATHEMATICAL optimization ,POLYNOMIALS ,COMPUTATIONAL complexity ,VECTOR analysis - Abstract
Primal methods constitute a common approach to solving (combinatorial) optimization problems. Starting from a given feasible solution, they successively produce new feasible solutions with increasingly better objective function value until an optimal solution is reached. From an abstract point of view, an augmentation problem is solved in each iteration. That is, given a feasible point, these methods find an augmenting vector, if one exists. Usually, augmenting vectors with certain properties are sought to guarantee the polynomial running time of the overall algorithm. In this paper, we show that one can solve every integer programming problem in polynomial time provided one can efficiently solve the directed augmentation problem. The directed augmentation problem arises from the ordinary augmentation problem by splitting each direction into its positive and its negative part and by considering linear objectives on these parts. Our main result states that in order to get a polynomial-time algorithm for optimization it is sufficient to efficiently find, for any linear objective function in the positive and negative part, an arbitrary augmenting vector. This result also provides a general framework for the design of polynomial-time algorithms for specific combinatorial optimization problems. We demonstrate its applicability by considering the min-cost flow problem, by giving a novel algorithm for linear programming over unimodular spaces, and by providing a different proof that for 0/1-integer programming an efficient algorithm solving the ordinary augmentation problem suffices for efficient optimization. Our main result also implies that directed augmentation is at least as hard as optimization. In other words, for an NP-hard problem already the task of finding a directed augmenting vector in polynomial time is hopeless, unless P = NP. We illustrate this kind of consequence with the help of the knapsack problem. [ABSTRACT FROM AUTHOR]
- Published
- 2002
- Full Text
- View/download PDF
46. PROJECT SCHEDULING IN AND-OR GRAPHS: A GENERALIZATION OF DIJKSTRA'S ALGORITHM.
- Author
-
Adelson-Velsky, George M. and Levner, Eugene
- Subjects
SCHEDULING ,PATH analysis (Statistics) ,GRAPHIC methods ,ALGORITHMS ,MULTIVARIATE analysis - Abstract
The paper considers a project scheduling problem in weighted directed graphs in which arcs represent operations while nodes are identified with starting and finishing endpoints of the operations; arc lengths represent operation durations. The graphs have two types of nodes: AND-nodes and OR-nodes. The problem is to find the earliest starting times for all operations. This problem generalizes the shortest path problem and the critical path problem. The complexity of the suggested algorithm is O(p'p), where p' is the number of arcs entering the AND-nodes and p is the total number of arcs. [ABSTRACT FROM AUTHOR]
- Published
- 2002
- Full Text
- View/download PDF
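The earliest-start computation the abstract describes can be sketched as a Dijkstra-style sweep: an OR-node is finalized with the minimum arrival over its incoming arcs, while an AND-node waits until all of its incoming arcs have been relaxed and then takes the maximum. This is a generic sketch under the assumptions of nonnegative arc lengths and all AND-node predecessors being reachable; it is not the authors' exact O(p'p) procedure.

```python
import heapq

def earliest_start_times(n, arcs, node_type, source):
    """Dijkstra-like sweep over an AND/OR graph.
    arcs: list of (u, v, length); node_type[v] is 'AND' or 'OR'.
    OR-nodes take the min over incoming arcs, AND-nodes the max.
    Unreached nodes keep time infinity."""
    succ = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v, w in arcs:
        succ[u].append((v, w))
        indeg[v] += 1
    INF = float('inf')
    t = [INF] * n
    t[source] = 0
    and_best = [0] * n   # running max of arrivals at each AND-node
    and_seen = [0] * n   # incoming arcs already relaxed
    done = [False] * n
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if done[u] or d > t[u]:     # skip stale queue entries
            continue
        done[u] = True
        for v, w in succ[u]:
            if node_type[v] == 'OR':
                if d + w < t[v]:
                    t[v] = d + w
                    heapq.heappush(pq, (t[v], v))
            else:  # AND: finalize only once every incoming arc is relaxed
                and_seen[v] += 1
                and_best[v] = max(and_best[v], d + w)
                if and_seen[v] == indeg[v]:
                    t[v] = and_best[v]
                    heapq.heappush(pq, (t[v], v))
    return t
```

Usage: on a graph where node 3 is an AND-node fed by two paths of total length 6, its earliest start is the later (max) arrival, while the OR-node 4 takes the earlier (min) one.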
47. A COMBINATORIAL, GRAPH-BASED SOLUTION METHOD FOR A CLASS OF CONTINUOUS-TIME OPTIMAL CONTROL PROBLEMS.
- Author
-
Khmelnitsky, Eugene
- Subjects
CONTROL theory (Engineering) ,ALGORITHMS ,OPERATIONS research ,MATHEMATICS ,PRODUCTION planning ,COMBINATORICS - Abstract
The paper addresses a class of continuous-time, optimal control problems whose solutions are typically characterized by both bang-bang and "singular" control regimes. Analytical study and numerical computation of such solutions are very difficult and far from complete when only techniques from control theory are used. This paper solves optimal control problems by reducing them to the combinatorial search for the shortest path in a specially constructed graph. Since the nodes of the graph are weighted in a sequence-dependent manner, we extend the classical, shortest-path algorithm to our case. The proposed solution method is currently limited to single-state problems with multiple control functions. A production planning problem and a train operation problem are optimally solved to illustrate the method. [ABSTRACT FROM AUTHOR]
- Published
- 2002
- Full Text
- View/download PDF
48. SEMISMOOTH MATRIX-VALUED FUNCTIONS.
- Author
-
Sun, Defeng and Sun, Jie
- Subjects
MATHEMATICAL programming ,MATRIX groups ,MATRIX derivatives ,NONSMOOTH optimization ,NEWTON-Raphson method ,ALGORITHMS - Abstract
Matrix-valued functions play an important role in the development of algorithms for semidefinite programming problems. This paper studies generalized differential properties of such functions related to nonsmooth-smoothing Newton methods. The first part of this paper discusses basic properties such as the generalized derivative, Rademacher's theorem, B-derivative, directional derivative, and semismoothness. The second part shows that the matrix absolute-value function, the matrix semidefinite-projection function, and the matrix projective residual function are strongly semismooth. [ABSTRACT FROM AUTHOR]
- Published
- 2002
- Full Text
- View/download PDF
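For context, the notion of (strong) semismoothness referenced in this abstract is standard; the following is the textbook definition (due to Mifflin, extended to vector-valued functions by Qi and Sun), not a quotation from the paper.

```latex
% F : \mathbb{R}^n \to \mathbb{R}^m locally Lipschitz and directionally
% differentiable at x is \emph{semismooth} at x if
F(x+h) - F(x) - Vh = o(\lVert h \rVert)
\quad \text{as } h \to 0, \ \text{for all } V \in \partial F(x+h),
% where \partial F denotes the Clarke generalized Jacobian; F is
% \emph{strongly} semismooth at x when the remainder improves to
% O(\lVert h \rVert^2).
```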
49. EQUIVALENT REPRESENTATIONS OF SET FUNCTIONS.
- Author
-
Grabisch, Michel, Marichal, Jean-Luc, and Roubens, Marc
- Subjects
SET functions ,REAL variables ,SET theory ,MOBIUS transformations ,MATHEMATICAL transformations ,ALGORITHMS - Abstract
This paper introduces four alternative representations of a set function: the Möbius transformation, the co-Möbius transformation, and the interactions between elements of any subset of a given set, as extensions of the Shapley and Banzhaf values. The links between the five equivalent representations of a set function are emphasized in this presentation. [ABSTRACT FROM AUTHOR]
- Published
- 2000
- Full Text
- View/download PDF
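The Möbius transformation mentioned in the abstract, and its inverse (the zeta transform), can be computed directly from the definitions m(S) = Σ_{T⊆S} (−1)^{|S|−|T|} v(T) and v(S) = Σ_{T⊆S} m(T). A brute-force sketch, exponential in the ground set and fine only for small examples; the dictionary-of-tuples convention here is mine, not the paper's.

```python
from itertools import combinations

def subsets(s):
    """All subsets of tuple s, as tuples (in combinations order)."""
    return [c for r in range(len(s) + 1) for c in combinations(s, r)]

def mobius(v, ground):
    """Möbius transform of a set function v (dict keyed by sorted tuples):
    m(S) = sum over T subset of S of (-1)^(|S|-|T|) * v(T)."""
    return {S: sum((-1) ** (len(S) - len(T)) * v[T] for T in subsets(S))
            for S in subsets(ground)}

def zeta(m, ground):
    """Inverse (zeta) transform: v(S) = sum over T subset of S of m(T)."""
    return {S: sum(m[T] for T in subsets(S)) for S in subsets(ground)}
```

Applying `zeta` to the output of `mobius` recovers the original set function, which is the equivalence the abstract refers to.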
50. Constructing New Weighted ℓ1-Algorithms for the Sparsest Points of Polyhedral Sets.
- Author
-
Zhao, Yun-Bin and Luo, Zhi-Quan
- Subjects
POLYHEDRAL functions ,ALGORITHMS ,MATHEMATICAL optimization ,IMAGE processing ,LINEAR algebra - Abstract
The ℓ0-minimization problem that seeks the sparsest point of a polyhedral set is a long-standing, challenging problem in the fields of signal and image processing, numerical linear algebra, and mathematical optimization. The weighted ℓ1-method is one of the most plausible methods for solving this problem. In this paper we develop a new weighted ℓ1-method through the strict complementarity theory of linear programs. More specifically, we show that locating the sparsest point of a polyhedral set can be achieved by seeking the densest possible slack variable of the dual problem of weighted ℓ1-minimization. As a result, ℓ0-minimization can be transformed, in theory, to ℓ0-maximization in dual space through some weight. This theoretical result provides a basis and an incentive to develop a new weighted ℓ1-algorithm, which is remarkably distinct from existing sparsity-seeking methods. The weight used in our algorithm is computed via a certain convex optimization instead of being determined locally at an iterate. The guaranteed performance of this algorithm is shown under some conditions, and the numerical performance of the algorithm has been demonstrated by empirical simulations. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
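The weighted ℓ1 subproblem underlying such methods, minimizing Σ_i w_i|x_i| subject to Ax = b, is a linear program after the standard variable split −t ≤ x ≤ t. A minimal sketch using `scipy.optimize.linprog`; it implements only this generic weighted ℓ1 step, not the paper's dual-based weight selection.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1_min(A, b, w):
    """Solve  min sum_i w_i |x_i|  s.t.  A x = b  as an LP over (x, t)
    with the split  -t <= x <= t  and objective  w^T t."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), w])        # minimize w^T t
    A_eq = np.hstack([A, np.zeros((m, n))])     # A x = b (t unconstrained here)
    I = np.eye(n)
    A_ub = np.vstack([np.hstack([I, -I]),       #  x - t <= 0
                      np.hstack([-I, -I])])     # -x - t <= 0
    b_ub = np.zeros(2 * n)
    bounds = [(None, None)] * n + [(0, None)] * n   # x free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                  bounds=bounds, method="highs")
    return res.x[:n]
```

With uniform weights this is plain ℓ1-minimization; on the example below the feasible set is x1 = x2 = 1 − x3, and the ℓ1 objective is uniquely minimized at the sparsest point (0, 0, 1).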