1,419 results
Search Results
2. Comments on a Paper by Romesh Saigal: "A Constrained Shortest Route Problem"
- Author
-
Rosseel, Marc
- Published
- 1968
3. Comments on the Paper: 'Heuristic and Special Case Algorithms for Dispersion Problems' by S. S. Ravi, D. J. Rosenkrantz, and G. K. Tayi
- Author
-
Tamir, Arie
- Published
- 1998
4. Early Integer Programming
- Author
-
Gomory, Ralph E.
- Published
- 2002
5. An Object-Oriented Random-Number Package with Many Long Streams and Substreams
- Author
-
L'Ecuyer, Pierre, Simard, Richard, Chen, E. Jack, and Kelton, W. David
- Published
- 2002
6. A Linear Programming Approach to the Cutting Stock Problem-Part II
- Author
-
Gilmore, P. C. and Gomory, R. E.
- Published
- 1963
7. Computational Methods for Oblivious Equilibrium
- Author
-
Weintraub, Gabriel Y., Benkard, C. Lanier, and Van Roy, Benjamin
- Published
- 2010
8. Bioinformatics and Management Science: Some Common Tools and Techniques
- Author
-
Abbas, Ali E. and Holmes, Susan P.
- Published
- 2004
9. The Greedy Procedure for Resource Allocation Problems: Necessary and Sufficient Conditions for Optimality
- Author
-
Federgruen, Awi and Groenevelt, Henri
- Published
- 1986
10. Algorithms for the Vehicle Routing and Scheduling Problems with Time Window Constraints
- Author
-
Solomon, Marius M.
- Published
- 1987
11. A Generalized Bounding Method for Multifacility Location Models
- Author
-
Love, Robert F. and Dowling, Paul D.
- Published
- 1989
12. George B. Dantzig: Operations Research Icon
- Author
-
Cottle, Richard W.
- Published
- 2005
- Full Text
- View/download PDF
13. A Family of Search Directions for Karmarkar's Algorithm
- Author
-
Anstreicher, Kurt M. and Watteyne, Patrick
- Published
- 1993
14. Sequencing with Earliness and Tardiness Penalties: A Review
- Author
-
Baker, Kenneth R. and Scudder, Gary D.
- Published
- 1990
15. Best Principal Submatrix Selection for the Maximum Entropy Sampling Problem: Scalable Algorithms and Performance Guarantees.
- Author
-
Li, Yongchun and Xie, Weijun
- Subjects
ARTIFICIAL intelligence ,APPROXIMATION algorithms ,ENTROPY ,SEARCH algorithms ,ALGORITHMS ,SURETYSHIP & guaranty - Abstract
This paper studies a classic maximum entropy sampling problem (MESP), which aims to select the most informative principal submatrix of a prespecified size from a covariance matrix. MESP is widely applied to many areas, including healthcare, power systems, manufacturing, and data science. By investigating its Lagrangian dual and primal characterization, we derive a novel convex integer program for MESP and show that its continuous relaxation yields a near-optimal solution. The results motivate us to study efficient approximation algorithms and develop their approximation bounds for MESP, which improves the best known one in the literature. This paper studies a classic maximum entropy sampling problem (MESP), which aims to select the most informative principal submatrix of a prespecified size from a covariance matrix. By investigating its Lagrangian dual and primal characterization, we derive a novel convex integer program for MESP and show that its continuous relaxation yields a near-optimal solution. The results motivate us to develop a sampling algorithm and derive its approximation bound for MESP, which improves the best known bound in literature. We then provide an efficient deterministic implementation of the sampling algorithm with the same approximation bound. Besides, we investigate the widely used local search algorithm and prove its first known approximation bound for MESP. The proof techniques further inspire for us an efficient implementation of the local search algorithm. Our numerical experiments demonstrate that these approximation algorithms can efficiently solve medium-size and large-scale instances to near optimality. Finally, we extend the analyses to the A-optimal MESP, for which the objective is to minimize the trace of the inverse of the selected principal submatrix. Funding: This work was supported by the National Science Foundation Division of Information and Intelligent Systems [Grant 2246417] and Division of Civil, Mechanical and Manufacturing Innovation [Grant 2246414]. Supplemental Material: The e-companion is available at https://doi.org/10.1287/opre.2023.2488. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
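A rough illustration of the selection task in the entry above: the snippet below greedily grows an index set S to (approximately) maximize log det C[S, S], the quantity MESP maximizes. It is a generic NumPy baseline under assumed names (greedy_mesp, a toy covariance C), not the sampling or local search algorithms analyzed in the paper.

```python
# Greedy forward selection for maximum entropy sampling (illustrative sketch).
import numpy as np

def greedy_mesp(C, k):
    """Repeatedly add the index that most increases log det C[S, S]."""
    n = C.shape[0]
    S = []
    for _ in range(k):
        def gain(i):
            idx = S + [i]
            sign, logdet = np.linalg.slogdet(C[np.ix_(idx, idx)])
            return logdet if sign > 0 else -np.inf
        S.append(max((i for i in range(n) if i not in S), key=gain))
    return sorted(S), np.linalg.slogdet(C[np.ix_(S, S)])[1]

# Toy usage on a random positive definite covariance matrix.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10))
C = A @ A.T + 1e-6 * np.eye(50)
print(greedy_mesp(C, 5))
```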
16. Recursive Importance Sketching for Rank Constrained Least Squares: Algorithms and High-Order Convergence.
- Author
-
Luo, Yuetian, Huang, Wen, Li, Xudong, and Zhang, Anru
- Subjects
LEAST squares ,GAUSS-Newton method ,LOW-rank matrices ,MACHINE learning ,ALGORITHMS - Abstract
Solving Rank Constrained Least Squares via Recursive Importance Sketching In statistics and machine learning, we sometimes run into the rank-constrained least squares problems, for which we need to find the best low-rank fit between sets of data, such as trying to figure out what factors are affecting the data, filling in missing information, or finding connections between different sets of data. This paper introduces a new method for solving this problem called the recursive importance sketching algorithm (RISRO), in which the central idea is to break the problem down into smaller, easier parts using a unique technique called "recursive importance sketching." This new method is not only easy to use, but it is also very efficient and gives accurate results. We prove that RISRO converges in a local quadratic-linear and quadratic rate under some mild conditions. Simulation studies also demonstrate the superior performance of RISRO. In this paper, we propose a recursive importance sketching algorithm for rank-constrained least squares optimization (RISRO). The key step of RISRO is recursive importance sketching, a new sketching framework based on deterministically designed recursive projections, and it significantly differs from the randomized sketching in the literature. Several existing algorithms in the literature can be reinterpreted under this new sketching framework, and RISRO offers clear advantages over them. RISRO is easy to implement and computationally efficient, and the core procedure in each iteration is to solve a dimension-reduced least squares problem. We establish the local quadratic-linear and quadratic rate of convergence for RISRO under some mild conditions. We also discover a deep connection of RISRO to the Riemannian Gauss–Newton algorithm on fixed rank matrices. The effectiveness of RISRO is demonstrated in two applications in machine learning and statistics: low-rank matrix trace regression and phase retrieval. Simulation studies demonstrate the superior numerical performance of RISRO. Funding: Y. Luo and A. Zhang were partially supported by the National Science Foundation [Grant CAREER-2203741]. W. Huang was partially supported by the Fundamental Research Funds for the Central Universities [Grant 20720190060] and the National Natural Science Foundation of China [Grant 12001455]. X. Li was partially supported by the National Natural Science Foundation of China [Grants 62141407 and 12271107], the Chenguang Program by the Shanghai Education Development Foundation and Shanghai Municipal Education Commission [Grant 19CG02], and the Shanghai Science and Technology Program [Grant 21JC1400600]. Supplemental Material: The online appendix is available at https://doi.org/10.1287/opre.2023.2445. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Validating Claims for Algorithms Proposed for Publication
- Author
-
Ignizio, James P.
- Published
- 1973
18. Comments on the paper: `Heuristic and Special Case Algorithms for Dispersion Problems' by S. S...
- Author
-
Tamir, Arie
- Subjects
HEURISTIC ,MATHEMATICAL optimization ,ALGORITHMS ,MATHEMATICAL statistics ,MATHEMATICS ,STATISTICS ,PROBABILITY theory - Abstract
This article presents the author's comments on a paper discussing heuristic and special case algorithms for dispersion problems. The results presented in that paper include a simple heuristic for Max-Min Facility Dispersion (MMFD) that provides a performance guarantee of 1/2, and a similar heuristic for Max-Avg Facility Dispersion (MAFD) with a performance guarantee of 1/4. It is also proved there that obtaining a performance guarantee of more than 1/2 for MMFD is NP-hard. The paper also discusses the one-dimensional versions of MMFD and MAFD, where the vertex set V consists of a set of n points on the real line. Section 2 of the paper is devoted to the heuristic for MMFD. It should be noted that this heuristic was analyzed before; in fact, it is shown therein that the performance guarantee of 1/2 holds for a more general model where the selected set P is restricted to lie in a compact subset of the network induced by the edge distances.
- Published
- 1998
- Full Text
- View/download PDF
19. A Nonparametric Algorithm for Optimal Stopping Based on Robust Optimization.
- Author
-
Sturt, Bradley
- Subjects
ROBUST optimization ,GRAPH theory ,POLYNOMIAL time algorithms ,ALGORITHMS ,DISTRIBUTION (Probability theory) - Abstract
Optimal stopping is a fundamental class of stochastic dynamic optimization problems with numerous applications in finance and operations management. In "A nonparametric algorithm for optimal stopping based on robust optimization," the author introduces an approach based on robust optimization for computing near-optimal solutions for computationally demanding stochastic optimal stopping problems with known probability distributions. Through this new use of robust optimization, the paper develops new algorithms for solving stochastic optimal stopping problems that are practical and can outperform state-of-the-art simulation-based algorithms in the context of pricing high-dimensional financial options. Optimal stopping is a fundamental class of stochastic dynamic optimization problems with numerous applications in finance and operations management. We introduce a new approach for solving computationally-demanding stochastic optimal stopping problems with known probability distributions. The approach uses simulation to construct a robust optimization problem that approximates the stochastic optimal stopping problem to any arbitrary accuracy; we then solve the robust optimization problem to obtain near-optimal Markovian stopping rules for the stochastic optimal stopping problem. In this paper, we focus on designing algorithms for solving the robust optimization problems that approximate the stochastic optimal stopping problems. These robust optimization problems are challenging to solve because they require optimizing over the infinite-dimensional space of all Markovian stopping rules. We overcome this challenge by characterizing the structure of optimal Markovian stopping rules for the robust optimization problems. In particular, we show that optimal Markovian stopping rules for the robust optimization problems have a structure that is surprisingly simple and finite-dimensional. We leverage this structure to develop an exact reformulation of the robust optimization problem as a zero-one bilinear program over totally unimodular constraints. We show that the bilinear program can be solved in polynomial time in special cases, establish computational complexity results for general cases, and develop polynomial-time heuristics by relating the bilinear program to the maximal closure problem from graph theory. Numerical experiments demonstrate that our algorithms for solving the robust optimization problems are practical and can outperform state-of-the-art simulation-based algorithms in the context of widely-studied stochastic optimal stopping problems from high-dimensional option pricing. Supplemental Material: The e-companion is available at https://doi.org/10.1287/opre.2023.2461. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
20. A Robust Spectral Clustering Algorithm for Sub-Gaussian Mixture Models with Outliers.
- Author
-
Srivastava, Prateek R., Sarkar, Purnamrita, and Hanasusanto, Grani A.
- Subjects
GAUSSIAN mixture models ,DISTRIBUTION (Probability theory) ,K-means clustering ,ALGORITHMS ,SIGNAL-to-noise ratio - Abstract
Traditional clustering algorithms such as k-means and vanilla spectral clustering are known to deteriorate significantly in the presence of outliers. Several previous works in literature have proposed robust variants of these algorithms; however, they do not provide any theoretical guarantees. Extending previous clustering literature on Gaussian mixture models, in their paper "A Robust Spectral Clustering Algorithm for Sub-Gaussian Mixture Models with Outliers," Prateek R. Srivastava, Purnamrita Sarkar, and Grani A. Hanasusanto developed a new spectral clustering algorithm and provided error bounds for the algorithm under a general sub-Gaussian mixture model setting with outliers. Surprisingly, their derived error bound matches with the best-known bound for semidefinite programs under the same setting without outliers. Numerical experiments on a variety of simulated and real-world data sets further demonstrate that their algorithm is less sensitive to outliers compared with other state-of-the-art algorithms. We consider the problem of clustering data sets in the presence of arbitrary outliers. Traditional clustering algorithms such as k-means and spectral clustering are known to perform poorly for data sets contaminated with even a small number of outliers. In this paper, we develop a provably robust spectral clustering algorithm that applies a simple rounding scheme to denoise a Gaussian kernel matrix built from the data points and uses vanilla spectral clustering to recover the cluster labels of data points. We analyze the performance of our algorithm under the assumption that the "good" data points are generated from a mixture of sub-Gaussians (we term these "inliers"), whereas the outlier points can come from any arbitrary probability distribution. For this general class of models, we show that the misclassification error decays at an exponential rate in the signal-to-noise ratio, provided the number of outliers is a small fraction of the inlier points. Surprisingly, this derived error bound matches with the best-known bound for semidefinite programs (SDPs) under the same setting without outliers. We conduct extensive experiments on a variety of simulated and real-world data sets to demonstrate that our algorithm is less sensitive to outliers compared with other state-of-the-art algorithms proposed in the literature. Funding: G. A. Hanasusanto was supported by the National Science Foundation Grants NSF ECCS-1752125 and NSF CCF-2153606. P. Sarkar gratefully acknowledges support from the National Science Foundation Grants NSF DMS-1713082, NSF HDR-1934932 and NSF 2019844. Supplemental Material: The online appendix is available at https://doi.org/10.1287/opre.2022.2317. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
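A hedged sketch of the "denoise a Gaussian kernel, then run vanilla spectral clustering" recipe summarized above. The thresholding step below is only a stand-in for the paper's rounding scheme, and sigma, tau, and the toy data are illustrative assumptions.

```python
# Kernel denoising followed by vanilla spectral clustering (illustrative sketch).
import numpy as np
from scipy.cluster.vq import kmeans2

def spectral_cluster_denoised(X, k, sigma=1.0, tau=0.1):
    # Gaussian kernel matrix built from the data points.
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2.0 * sigma ** 2))
    # Simple denoising: zero out weak similarities (stand-in for the rounding scheme).
    K = np.where(K >= tau, K, 0.0)
    # Vanilla spectral clustering on the symmetrically normalized kernel.
    d = K.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = D_inv_sqrt @ K @ D_inv_sqrt
    _, eigvecs = np.linalg.eigh(L)
    U = eigvecs[:, -k:]                                   # top-k eigenvectors
    U = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    _, labels = kmeans2(U, k, minit="++", seed=0)
    return labels

# Two well-separated sub-Gaussian blobs plus a few far-away outliers.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (30, 2)),
               rng.normal(3, 0.3, (30, 2)),
               rng.uniform(-10, 10, (4, 2))])
print(spectral_cluster_denoised(X, k=2))
```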
21. A Decomposition Algorithm with Fast Identification of Critical Contingencies for Large-Scale Security-Constrained AC-OPF.
- Author
-
Curtis, Frank E., Molzahn, Daniel K., Tu, Shenyinying, Wächter, Andreas, Wei, Ermin, and Wong, Elizabeth
- Subjects
ELECTRICAL load ,ALGORITHMS ,OPERATIONS research ,DECOMPOSITION method - Abstract
The article "A Decomposition Algorithm with Fast Identification of Critical Contingencies for Large-Scale Security-Constrained AC-OPF" presents the decomposition algorithm used by Team GO-SNIP for the ARPA-E Grid Optimization (GO) Competition Challenge 1, held from November 2018 through October 2019. The algorithm involves unique contingency ranking and evaluation strategies for determining the important contingencies to include in a master problem that approximates the original large-scale security-constrained problem. It also involves efficient strategies for handling the complementarity constraints that appear in the model and for handling the arising degeneracies. Software implementation details are described, and the results of an extensive set of numerical experiments are provided to illustrate the effectiveness of each of the used techniques. Team GO-SNIP received a second-place finish in Challenge 1 of the Go Competition. A decomposition algorithm for solving large-scale security-constrained AC optimal power flow problems is presented. The formulation considered is the one used in the Advanced Research Projects Agency-Energy Grid Optimization Competition, Challenge 1, held from November 2018 through October 2019. Algorithmic strategies are proposed for contingency selection, fast contingency evaluation, handling complementarity constraints, avoiding issues related to degeneracy, and exploiting parallelism. The results of numerical experiments are provided to demonstrate the effectiveness of the proposed techniques as compared with alternative strategies. History: This paper has been accepted for the Operations Research Special Issue on Computational Advances in Short-Term Power System Operations. Funding: This work was supported by the Advanced Research Projects Agency-Energy [Grant DE-AR0001073]. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
22. Robust Dual Dynamic Programming.
- Author
-
Georghiou, Angelos, Tsoukalas, Angelos, and Wiesemann, Wolfram
- Subjects
DYNAMIC programming ,ROBUST optimization ,INVENTORY control ,ALGORITHMS - Abstract
In the paper "Robust Dual Dynamic Programming," Angelos Georghiou, Angelos Tsoukalas, and Wolfram Wiesemann propose a novel solution scheme for addressing planning problems with long horizons. Such problems can be formulated as multistage robust optimization problems. The proposed method takes advantage of the decomposable nature of these problems by bounding the costs arising in the future stages through lower and upper cost-to-go functions. The proposed scheme does not require a relatively complete recourse, and it offers deterministic upper and lower bounds throughout the execution of the algorithm. The promising performance of the algorithm is shown in a stylized inventory-management problem in which the proposed algorithm achieved the optimal solution in problem instances with 100 time stages in a few minutes. Multistage robust optimization problems, where the decision maker can dynamically react to consecutively observed realizations of the uncertain problem parameters, pose formidable theoretical and computational challenges. As a result, the existing solution approaches for this problem class typically determine suboptimal solutions under restrictive assumptions. In this paper, we propose a robust dual dynamic programming (RDDP) scheme for multistage robust optimization problems. The RDDP scheme takes advantage of the decomposable nature of these problems by bounding the costs arising in the future stages through lower and upper cost-to-go functions. For problems with uncertain technology matrices and/or constraint right-hand sides, our RDDP scheme determines an optimal solution in finite time. Also, if the objective function and/or the recourse matrices are uncertain, our method converges asymptotically (but deterministically) to an optimal solution. Our RDDP scheme does not require a relatively complete recourse, and it offers deterministic upper and lower bounds throughout the execution of the algorithm. We show the promising performance of our algorithm in a stylized inventory management problem. Supplemental material is available at https://doi.org/10.1287/opre.2018.1835. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
23. A Dynamic Principal-Agent Model with Hidden Information: Sequential Optimality Through Truthful State Revelation.
- Author
-
Hao Zhang and Zenios, Stefanos
- Subjects
AGENCY (Law) ,DECISION making ,MARKOV processes ,CONTRACTS ,UTILITY theory ,ALGORITHMS - Abstract
This paper proposes a general framework for a large class of multiperiod principal-agent problems. In this framework, a principal has a primary stake in the performance of a system but delegates its control to an agent. The underlying system is a Markov decision process, where the state of the system can only be observed by the agent but the agent's action is observed by both parties. This paper develops a dynamic programming algorithm to derive optimal long-term contracts for the principal. The principal indirectly controls the underlying system by offering the agent a menu of continuation utility vectors along public information paths; the agent's best response, expressed in his choice of continuation utilities, induces truthful state revelation and results in actions that maximize the principal's expected payoff. This problem is meaningful to the operations research community because it can be framed as the problem of optimally designing the reward structure of a Markov decision process with hidden states and has many applications of interest as discussed in this paper [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
24. COMMENTS ON A PAPER BY M. L. WOLFSON: 'SELECTING THE BEST LENGTHS TO STOCK'
- Author
-
Cohen, Gerald D.
- Subjects
STOCK prices ,STOCKS (Finance) ,DYNAMIC programming ,NONLINEAR programming ,INTEGER programming ,LINEAR programming ,ALGORITHMS - Abstract
This article presents comments on a paper by M.L. Wolfson related to selecting the best lengths to stock. The problem considered in the paper by Wolfson is a classic specimen of the type ideally structured for direct and practical solution by dynamic programming. The author's algorithm, given in the paper, is in the spirit of this method, but its understanding could be greatly simplified if put into the recursive formalism of dynamic programming.
- Published
- 1966
- Full Text
- View/download PDF
25. Algorithms and Complexities of Matching Variants in Covariate Balancing.
- Author
-
Hochbaum, Dorit S., Levin, Asaf, and Rao, Xu
- Subjects
POLYNOMIAL time algorithms ,ALGORITHMS ,NP-hard problems - Abstract
In an observational study there are two disjoint groups of samples, one of treatment samples and the other of control samples. Each of the samples is characterized by several observed covariates. Covariate balancing problems arise when estimating causal effects using observational data. It is desirable to replicate a randomized experiment by obtaining treatment and control groups with similar covariate distributions. Even though covariate balancing problems have been studied extensively, the complexity status of many variants has not been established. Some of our results demonstrate that 2-covariate balancing problems are polynomial time solvable, whereas almost all problems are hard for three or more covariates. These results have practical implications, such as justifying the use of implicit enumeration techniques or heuristics for the hard cases. A new approach suggested by our results is to use the 2-covariate polynomial cases and relax the problem by aggregating covariates into two sets to be solved efficiently. Here, we study several variants of matching problems that arise in covariate balancing. Covariate balancing problems can be viewed as variants of matching, or b-matching, with global side constraints. We present a comprehensive complexity study of the covariate balancing problems, providing polynomial time algorithms or proofs of NP-hardness. The polynomial time algorithms described are mostly combinatorial and rely on network flow techniques. In addition, we present several fixed-parameter tractable results for problems where the number of covariates and the number of levels of each covariate are seen as a parameter. Funding: This work was supported by National Science Foundation [Grants CMMI-1760102 and NSF 2112533]; Israel Science Foundation (308/18). Supplemental Material: The e-companion is available at https://doi.org/10.1287/opre.2022.2286. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
26. A Linear Fractional Max-Min Problem.
- Author
-
Cook, W. D., Kirby, M. J. L., and Mehndiratta, S. L.
- Subjects
FRACTIONAL integrals ,LINEAR programming ,MATHEMATICAL programming ,MAXIMA & minima ,ALGORITHMS ,EXTREMAL problems (Mathematics) ,ALGEBRA ,MATHEMATICAL optimization - Abstract
This paper is concerned with a linear fractional problem of the form: max_X min_Y F(X, Y) = (cX + dY + α)/(fX + gY + β), subject to AX + BY ≤ b; X, Y ≥ 0. This problem represents a generalization of a problem considered in the literature in which F(X, Y) is assumed to be linear. A number of results for the linear case are extended; and, in particular, it is shown that this fractional max-min problem is equivalent to a quasi-convex programming problem whose optimal solution lies at a vertex of the feasible region. Using these results, we develop an algorithm for solving this problem. The paper concludes with a numerical example. [ABSTRACT FROM AUTHOR]
- Published
- 1975
- Full Text
- View/download PDF
27. On the Set-Covering Problem: II. An Algorithm for Set Partitioning.
- Author
-
Balas, Egon and Padberg, Manfred
- Subjects
INDUSTRIAL engineering ,LINEAR programming ,OPERATIONS research ,ALGORITHMS ,EQUATIONS ,SYSTEMS theory - Abstract
In an earlier paper [Opns. Res. 20, 1153-1161 (1972)] we proved that any feasible integer solution to the linear program associated with the equality-constrained set-covering problem can be obtained from any other feasible integer solution by a sequence of less than m pivots (where m is the number of equations), such that each solution generated in the sequence is integer. However, degeneracy makes it difficult to find a sequence of pivots leading to an integer optimum. In this paper we give a constructive characterization of adjacency relations between integer vertices of the feasible set that enables us to generate edges (all, if necessary) connecting a given integer vertex to adjacent integer vertices. This helps overcome the difficulties caused by degeneracy and leads to a class of algorithms, of which we discuss two. [ABSTRACT FROM AUTHOR]
- Published
- 1975
- Full Text
- View/download PDF
28. Asymptotic Linear Programming.
- Author
-
Jeroslow, Robert G.
- Subjects
LINEAR programming ,ALGORITHMS ,MATHEMATICAL functions ,PRODUCTION scheduling ,MATHEMATICAL programming ,MATHEMATICAL models - Abstract
This paper studies the linear programming problem in which all coefficients (even those of the stipulations matrix) are rational functions of a single parameter t called 'time,' and provides an algorithm that can serve problems of the following two types: (1) Steady-state behavior [the algorithm can be used to determine the functional form x(t) of the optimal solution as a function of t, this form being valid for all 'sufficiently large' values of t], and (2) sensitivity analysis [if a value t0 of 'time' is given, the algorithm can be used to determine the two possible functional forms of the optimal solution for all values of t 'sufficiently close' to t0 (the first functional form valid for t ≤ t0, the second for t ≥ t0)]. In addition, the paper gives certain qualitative information regarding steady-state behavior, including the following result: If for some one of the properties of consistency, boundedness, or bounded constraint set, there exists a sequence tn ↗ +∞ such that the linear program at tn has this property for all n, then the program has this property for all 'sufficiently large' values of t. [ABSTRACT FROM AUTHOR]
- Published
- 1973
- Full Text
- View/download PDF
29. Letters to the Editor.
- Author
-
Ignizio, James P.
- Subjects
LETTERS to the editor ,ALGORITHMS - Abstract
This letter calls attention to the problems of validating the claim made for algorithms in published papers, and proposes a system for review that will validate such claims thoroughly before publication. [ABSTRACT FROM AUTHOR]
- Published
- 1973
- Full Text
- View/download PDF
30. Some Properties of Generalized Concave Functions.
- Author
-
Thompson Jr., W. A. and Parke, Darrel W.
- Subjects
CONCAVE functions ,REAL variables ,MATHEMATICAL programming ,COMPLEX variables ,ALGORITHMS ,FUNCTIONAL equations ,MATHEMATICAL optimization ,OPERATIONS research - Abstract
This paper examines properties and interrelations of several concepts of generalized concavity. It shows that the class of functions that are both ‘generalized concave’ and ‘generalized convex’ is closely related to the class of monotone functions of a single variable. After excluding a certain small class of exceptions, the paper shows that, for arbitrary (perhaps not differentiable) functions, concave implies pseudoconcave, pseudoconcave implies strictly quasiconcave, and strictly quasiconcave implies quasiconcave. Several results characterizing the extreme values of generalized concave functions are given. [ABSTRACT FROM AUTHOR]
- Published
- 1973
- Full Text
- View/download PDF
31. CHANCE CONSTRAINED PROGRAMMING WITH JOINT CONSTRAINTS.
- Author
-
Miller, Bruce L. and Wagner, Harvey M.
- Subjects
LINEAR programming ,DYNAMIC programming ,MATHEMATICAL programming ,LOGARITHMIC functions ,NONLINEAR programming ,ALGORITHMS ,OPERATIONS research - Abstract
This paper considers the mathematical properties of chance constrained programming problems where the restriction is on the joint probability of a multivariate random event. One model that is considered arises when the right-hand side constants of the linear constraints are random. Another model treated here occurs when the coefficients of the linear programming variables are described by a multinormal distribution. It is shown that under certain restrictions both situations can be viewed as a deterministic nonlinear programming problem. Since most computational methods for solving nonlinear programming models require the constraints be concave, this paper explores whether the resultant problem meets the concavity assumption. For many probability laws of practical importance, the constraint in the first type of model is shown to violate concavity. However, a simple logarithmic transformation does produce a concave restriction for an important class of problems. The paper also surveys the `generalized linear programming' method for solving such problems when the logarithmic transformation is justified. For the second type model, the constraint is demonstrated to be nonconcave. [ABSTRACT FROM AUTHOR]
- Published
- 1965
- Full Text
- View/download PDF
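As a worked illustration of the logarithmic transformation mentioned above (a sketch under assumed structure, not the paper's full development): suppose the right-hand sides b_1, …, b_m are independent random variables with marginal distribution functions F_1, …, F_m and the joint chance constraint takes the form P{A_i x ≥ b_i for all i} ≥ α. Then

$$
P\{A_i x \ge b_i,\ i = 1,\dots,m\} \;=\; \prod_{i=1}^{m} F_i(A_i x) \;\ge\; \alpha
\quad\Longleftrightarrow\quad
\sum_{i=1}^{m} \log F_i(A_i x) \;\ge\; \log \alpha ,
$$

and each term log F_i(A_i x) is concave in x whenever F_i is log-concave (as for the normal, uniform, and many other laws), so the transformed restriction is concave even though the product form generally is not.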
32. DISCOUNTED MARKOV PROGRAMMING IN A PERIODIC PROCESS.
- Author
-
Riis, Jens Ove
- Subjects
MARKOV processes ,PROBABILITY theory ,ALGORITHMS ,MATHEMATICAL programming ,ITERATIVE methods (Mathematics) - Abstract
This paper deals with a nonstationary discrete-time Markov process whose transition probabilities vary periodically in time. Each transition results in a reward that varies within the same cycle as the transition matrix. For infinite processes a policy-iteration algorithm is developed that effectively determines an optimal policy maximizing the total discounted reward. The paper represents an extension of R. A. HOWARD's policy-iteration technique for stationary Markov processes. A numerical example is given in which the developed iteration algorithm is demonstrated. [ABSTRACT FROM AUTHOR]
- Published
- 1965
- Full Text
- View/download PDF
33. THE MULTI-INDEX PROBLEM.
- Author
-
Haley, K. B.
- Subjects
TRANSPORTATION problems (Programming) ,TRANSPORTATION ,ALGORITHMS ,PROBLEM solving ,LINEAR programming ,MATHEMATICAL inequalities ,SIMPLEXES (Mathematics) ,OPERATIONS research ,EQUATIONS - Abstract
The multi-index transportation problem is defined and a summary of the algorithm described in an earlier paper is given. This paper discusses the theorems justifying the use of the algorithm and gives a necessary condition for the existence of a solution. It also describes the formulation of three special-structure linear-programming problems as examples of multi-index problems. The three problems discussed are the capacitated Hitchcock problem, transportation where there is more than one method of transport available, and the transformation of one type of multi-index problem into another. A final section describes problems that have restrictions in the form of inequalities. [ABSTRACT FROM AUTHOR]
- Published
- 1963
- Full Text
- View/download PDF
34. Fast Algorithms for Rank-1 Bimatrix Games.
- Author
-
Adsul, Bharat, Garg, Jugal, Mehta, Ruta, Sohoni, Milind, and von Stengel, Bernhard
- Subjects
NASH equilibrium ,POLYNOMIAL time algorithms ,DEALERS (Retail trade) ,COMPUTATIONAL complexity ,GAMES ,ALGORITHMS - Abstract
Games with multiplicative payoffs Trade is win-win: The buyer gets a product that the seller can provide more easily. Selling more and buying more benefits both. This mutual benefit can sometimes be represented multiplicatively. But the buyer also pays a price to the seller. This money transfer is win-lose, a zero-sum game. The sum of the two payoff matrices is a matrix of rank one. In "Fast Algorithms for Rank-1 Bimatrix Games," Bharat Adsul, Jugal Garg, Ruta Mehta, Milind Sohoni, and Bernhard von Stengel show new algorithms that analyze such rank-1 games. The computation time for finding one Nash equilibrium is polynomial. Finding all Nash equilibria takes time comparable to finding a single equilibrium of a general bimatrix game. This puts games in computational reach that are economically much more interesting than zero-sum games (which are of rank zero), whereas solving general games is computationally hard. The paper's detailed structural analysis of games based on their rank enhances our understanding of this computational complexity. The rank of a bimatrix game is the matrix rank of the sum of the two payoff matrices. This paper comprehensively analyzes games of rank one and shows the following: (1) For a game of rank r, the set of its Nash equilibria is the intersection of a generically one-dimensional set of equilibria of parameterized games of rank r − 1 with a hyperplane. (2) One equilibrium of a rank-1 game can be found in polynomial time. (3) All equilibria of a rank-1 game can be found by following a piecewise linear path. In contrast, such a path-following method finds only one equilibrium of a bimatrix game. (4) The number of equilibria of a rank-1 game may be exponential. (5) There is a homeomorphism between the space of bimatrix games and their equilibrium correspondence that preserves rank. It is a variation of the homeomorphism used for the concept of strategic stability of an equilibrium component. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
35. On the Consistent Path Problem.
- Author
-
Lozano, Leonardo, Bergman, David, and Smith, J. Cole
- Subjects
ALGORITHMS ,INTEGER programming ,COMBINATORIAL optimization - Abstract
This paper studies a novel decomposition scheme, utilizing decision diagrams for modeling elements of a problem where typical linear relaxations fail to provide sufficiently tight bounds. Given a collection of decision diagrams, each representing a portion of the problem, together with linear inequalities modeling other portions of the problem, how can one efficiently optimize over such a representation? In this paper, we model the problem as a consistent path problem, where a path in each diagram has to be identified, all of which agree on the value assignments to variables. We establish complexity results and propose a branch-and-cut framework for solving the decomposition. Through application to binary cubic optimization and a variant of the market split problem, we show that the decomposition approach provides significant improvement gains over standard linear models. The application of decision diagrams in combinatorial optimization has proliferated in the last decade. In recent years, authors have begun to investigate how to use not one, but a set of diagrams, to model constraints and objective function terms. Optimizing over a collection of decision diagrams, the problem we refer to as the consistent path problem (CPP) can be addressed by associating a network-flow model with each decision diagram, jointly linked through channeling constraints. A direct application of integer programming to the ensuing model has already been shown to result in algorithms that provide orders-of-magnitude performance gains over classical methods. Lacking, however, is a careful study of dedicated solution methods designed to solve the CPP. This paper provides a detailed study of the CPP, including a discussion on complexity results and a complete polyhedral analysis. We propose a cut-generation algorithm, which, under a structured ordering property, finds a cut, if one exists, through an application of the classical maximum flow problem, albeit in an exponentially sized network. We use this procedure to fuel a cutting-plane algorithm that is applied to unconstrained binary cubic optimization and a variant of the market split problem, resulting in an algorithm that compares favorably with CPLEX, using standard integer programming formulations for both problems. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
36. Why Is Maximum Clique Often Easy in Practice?
- Author
-
Walteros, Jose L. and Buchanan, Austin
- Subjects
ALGORITHMS ,POLYNOMIAL time algorithms - Abstract
In this paper, Jose L. Walteros and Austin Buchanan focus on providing a rigorous, worst-case explanation for why the maximum clique problem is apparently so easy in naturally occurring graphs, despite its intractability in the worst case. To this day, there exist unsolved benchmark instances with just 1,000 vertices. In contrast, real-life instances appear to be much easier. Indeed, relatively simple algorithms can solve instances with millions of vertices in just a few seconds. However, these algorithms lack any worst-case guarantees on their running time. Exactly why these real-life instances are so easy has been somewhat of a puzzle. This paper shows that maximum clique can be solved in time polynomial in the size of the graph, but exponential in a graph invariant that we denote by g. Typically, g is very small for real-life graphs, being 0, 1 or 2 in over half of the instances coming from commonly used testbeds. In these and other cases where g can be thought of as a small constant, our algorithm runs in time O(dm) = O(m^1.5). To this day, the maximum clique problem remains a computationally challenging problem. Indeed, despite researchers' best efforts, there exist unsolved benchmark instances with 1,000 vertices. However, relatively simple algorithms solve real-life instances with millions of vertices in a few seconds. Why is this the case? Why is the problem apparently so easy in many naturally occurring networks? In this paper, we provide an explanation. First, we observe that the graph's clique number ω is very near to the graph's degeneracy d in most real-life instances. This observation motivates a main contribution of this paper, which is an algorithm for the maximum clique problem that runs in time polynomial in the size of the graph, but exponential in the gap g ≔ (d + 1) − ω between the clique number ω and its degeneracy-based upper bound d + 1. When this gap g can be treated as a constant, as is often the case for real-life graphs, the proposed algorithm runs in time O(dm) = O(m^1.5). This provides a rigorous explanation for the apparent easiness of these instances despite the intractability of the problem in the worst case. Further, our implementation of the proposed algorithm is actually practical—competitive with the best approaches from the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
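To make the degeneracy-based bound in the entry above concrete, here is a small sketch that computes a graph's degeneracy d by repeatedly deleting a minimum-degree vertex; the clique number then satisfies ω ≤ d + 1. The adjacency-dict format and function name are illustrative choices, not the paper's implementation.

```python
# Degeneracy ordering via repeated minimum-degree removal (illustrative sketch).
def degeneracy(adj):
    """adj: dict mapping vertex -> set of neighbors. Returns (d, ordering)."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    d, order = 0, []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))       # minimum-degree vertex
        d = max(d, len(adj[v]))
        order.append(v)
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return d, order

# Toy graph: a triangle {0, 1, 2} with a pendant vertex 3.
G = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
d, order = degeneracy(G)
print(d, "-> clique number is at most", d + 1)        # prints 2 -> at most 3
```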
37. Optimistic Monte Carlo Tree Search with Sampled Information Relaxation Dual Bounds.
- Author
-
Jiang, Daniel R., Al-Kanj, Lina, and Powell, Warren B.
- Subjects
MONTE Carlo method ,DECISION trees ,STATISTICAL decision making ,ARTIFICIAL intelligence ,ALGORITHMS ,DECISION making - Abstract
In the paper, "Optimistic Monte Carlo Tree Search with Sampled Information Relaxation Dual Bounds," the authors propose an extension to Monte Carlo tree search that uses the idea of "sampling the future" to produce noisy upper bounds on nodes in the decision tree. These upper bounds can help guide the tree expansion process and produce decision trees that are deeper rather than wider, in effect concentrating computation toward more useful parts of the state space. The algorithm's effectiveness is illustrated in a ride-sharing setting, where a driver/vehicle needs to make dynamic decisions regarding trip acceptance and relocations. Monte Carlo tree search (MCTS), most famously used in game-play artificial intelligence (e.g., the game of Go), is a well-known strategy for constructing approximate solutions to sequential decision problems. Its primary innovation is the use of a heuristic, known as a default policy, to obtain Monte Carlo estimates of downstream values for states in a decision tree. This information is used to iteratively expand the tree toward regions of states and actions that an optimal policy might visit. However, to guarantee convergence to the optimal action, MCTS requires the entire tree to be expanded asymptotically. In this paper, we propose a new "optimistic" tree search technique called primal-dual MCTS that uses sampled information relaxation upper bounds on potential actions to make tree expansion decisions, creating the possibility of ignoring parts of the tree that stem from highly suboptimal choices. The core contribution of this paper is to prove that despite converging to a partial decision tree in the limit, the recommended action from primal-dual MCTS is optimal. The new approach shows promise when used to optimize the behavior of a single driver navigating a graph while operating on a ride-sharing platform. Numerical experiments on a real data set of taxi trips in New Jersey suggest that primal-dual MCTS improves on standard MCTS (upper confidence trees) and other policies while exhibiting a reduced sensitivity to the size of the action space. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
38. Simple Bayesian Algorithms for Best-Arm Identification.
- Author
-
Russo, Daniel
- Subjects
ALGORITHMS ,BAYESIAN analysis ,MULTI-armed bandit problem (Probability theory) ,OPERATIONS research - Abstract
This paper considers the optimal adaptive allocation of measurement effort for identifying the best among a finite set of options or designs. An experimenter sequentially chooses designs to measure and observes noisy signals of their quality with the goal of confidently identifying the best design after a small number of measurements. Just as the multiarmed bandit problem crystallizes the tradeoff between exploration and exploitation, this "pure exploration" variant crystallizes the challenge of rapidly gathering information before committing to a final decision. The paper proposes several simple Bayesian algorithms for allocating measurement effort and, by characterizing fundamental asymptotic limits on the performance of any algorithm, formalizes a sense in which these seemingly naive algorithms are the best possible. This paper considers the optimal adaptive allocation of measurement effort for identifying the best among a finite set of options or designs. An experimenter sequentially chooses designs to measure and observes noisy signals of their quality with the goal of confidently identifying the best design after a small number of measurements. This paper proposes three simple and intuitive Bayesian algorithms for adaptively allocating measurement effort and formalizes a sense in which these seemingly naive rules are the best possible. One proposal is top-two probability sampling, which computes the two designs with the highest posterior probability of being optimal and then randomizes to select among these two. One is a variant of top-two sampling that considers not only the probability that a design is optimal, but the expected amount by which its quality exceeds that of other designs. The final algorithm is a modified version of Thompson sampling that is tailored for identifying the best design. We prove that these simple algorithms satisfy a sharp optimality property. In a frequentist setting where the true quality of the designs is fixed, one hopes that the posterior definitively identifies the optimal design, in the sense that that the posterior probability assigned to the event that some other design is optimal converges to zero as measurements are collected. We show that under the proposed algorithms, this convergence occurs at an exponential rate, and the corresponding exponent is the best possible among all allocation rules. It should be highlighted that the proposed algorithms depend on a single tuning parameter, which determines the probability used when randomizing among the top-two designs. Attaining the optimal rate of posterior convergence requires either that this parameter is set optimally or is tuned adaptively toward the optimal value. The paper goes further, characterizing the exponent attained on any problem instance and for any value of the tunable parameter. This exponent is interpreted as being optimal among a constrained class of allocation rules. Finally, considerable robustness to this parameter is established through numerical experiments and theoretical results. When this parameter is set to 1/2, the exponent attained is within a factor of 2 of best possible across all problem instances. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
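A minimal sketch of the top-two idea described above, for Bernoulli designs with Beta(1, 1) priors: estimate each design's posterior probability of being best by Monte Carlo, keep the two largest, and randomize between them with tuning parameter beta = 1/2. The function name, the Monte Carlo approximation, and the toy means are assumptions made for illustration.

```python
# Top-two probability sampling for Bernoulli designs (illustrative sketch).
import numpy as np

def top_two_sampling(true_means, budget=2000, beta=0.5, n_mc=1000, seed=0):
    rng = np.random.default_rng(seed)
    k = len(true_means)
    a, b = np.ones(k), np.ones(k)                  # Beta posterior parameters
    for _ in range(budget):
        # Posterior probability that each design is the best (Monte Carlo estimate).
        theta = rng.beta(a, b, size=(n_mc, k))
        p_best = np.bincount(theta.argmax(axis=1), minlength=k) / n_mc
        top = np.argsort(p_best)[::-1][:2]         # two most-probably-optimal designs
        arm = top[0] if rng.random() < beta else top[1]
        reward = rng.random() < true_means[arm]    # noisy measurement of the design
        a[arm] += reward
        b[arm] += 1 - reward
    return np.argmax(a / (a + b))                  # recommended design

print(top_two_sampling([0.3, 0.5, 0.52, 0.4]))
```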
39. Rank Centrality: Ranking from Pairwise Comparisons.
- Author
-
Negahban, Sahand, Oh, Sewoong, and Shah, Devavrat
- Subjects
MULTINOMIAL distribution ,AGGREGATION (Statistics) ,ALGORITHMS ,DISTRIBUTION (Probability theory) ,MATHEMATICS - Abstract
The question of aggregating pairwise comparisons to obtain a global ranking over a collection of objects has been of interest for a very long time: be it ranking of online gamers (e.g., MSR's TrueSkill system) and chess players, aggregating social opinions, or deciding which product to sell based on transactions. In most settings, in addition to obtaining a ranking, finding 'scores' for each object (e.g., player's rating) is of interest for understanding the intensity of the preferences. In this paper, we propose Rank Centrality, an iterative rank aggregation algorithm for discovering scores for objects (or items) from pairwise comparisons. The algorithm has a natural random walk interpretation over the graph of objects with an edge present between a pair of objects if they are compared; the score, which we call Rank Centrality, of an object turns out to be its stationary probability under this random walk. To study the efficacy of the algorithm, we consider the popular Bradley-Terry-Luce (BTL) model (equivalent to the Multinomial Logit (MNL) for pairwise comparisons) in which each object has an associated score that determines the probabilistic outcomes of pairwise comparisons between objects. In terms of the pairwise marginal probabilities, which is the main subject of this paper, the MNL model and the BTL model are identical. We bound the finite sample error rates between the scores assumed by the BTL model and those estimated by our algorithm. In particular, the number of samples required to learn the score well with high probability depends on the structure of the comparison graph. When the Laplacian of the comparison graph has a strictly positive spectral gap, e.g., each item is compared to a subset of randomly chosen items, this leads to dependence on the number of samples that is nearly order optimal. Experimental evaluations on synthetic data sets generated according to the BTL model show that our algorithm performs as well as the maximum likelihood estimator for that model and outperforms other popular ranking algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
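The random-walk interpretation described above translates almost directly into code: build a transition matrix whose move probabilities are the empirical "j beats i" fractions, and take the stationary distribution as the scores. The input convention (wins[i, j] = number of times i beat j) and the fixed power-iteration length are illustrative assumptions.

```python
# Rank Centrality as the stationary distribution of a comparison random walk
# (illustrative sketch).
import numpy as np

def rank_centrality(wins):
    """wins[i, j] = number of comparisons in which item i beat item j."""
    n = wins.shape[0]
    total = wins + wins.T                          # comparisons per pair
    compared = total > 0
    A = np.zeros((n, n))
    # Move from i to j with probability proportional to the fraction of times j beat i.
    A[compared] = wins.T[compared] / total[compared]
    d_max = compared.sum(axis=1).max()             # maximum number of neighbors
    P = A / d_max
    np.fill_diagonal(P, 1.0 - P.sum(axis=1))       # self-loops make rows sum to 1
    pi = np.full(n, 1.0 / n)
    for _ in range(1000):                          # power iteration
        pi = pi @ P
    return pi / pi.sum()

# Toy example: item 2 usually beats item 1, which usually beats item 0.
wins = np.array([[0, 2, 1],
                 [8, 0, 3],
                 [9, 7, 0]])
print(rank_centrality(wins))                       # highest score expected for item 2
```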
40. Distributed Welfare Games.
- Author
-
Marden, Jason R. and Wierman, Adam
- Subjects
RESOURCE allocation ,GAME theory ,NASH equilibrium ,UTILITY functions ,ALGORITHMS ,SCALABILITY - Abstract
Game-theoretic tools are becoming a popular design choice for distributed resource allocation algorithms. A central component of this design choice is the assignment of utility functions to the individual agents. The goal is to assign each agent an admissible utility function such that the resulting game possesses a host of desirable properties, including scalability, tractability, and existence and efficiency of pure Nash equilibria. In this paper we formally study this question of utility design on a class of games termed distributed welfare games. We identify several utility design methodologies that guarantee desirable game properties irrespective of the specific application domain. Lastly, we illustrate the results in this paper on two commonly studied classes of resource allocation problems: "coverage" problems and "coloring" problems. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
41. Flexible PMP Approach for Large-Size Cell Formation.
- Author
-
Goldengorin, Boris, Krushinsky, Dmitry, and Slomp, Jannes
- Subjects
INDUSTRIAL engineering ,HEURISTIC programming ,MATHEMATICAL models ,ALGORITHMS ,ERROR analysis in mathematics ,BOOLEAN functions - Abstract
Lately, the problem of cell formation (CF) has gained a lot of attention in the industrial engineering literature. Since it was formulated (more than 50 years ago), the problem has incorporated additional industrial factors and constraints while its solution methods have been constantly improving in terms of the solution quality and CPU times. However, despite all the efforts made, the available solution methods (including those for a popular model based on the p-median problem, PMP) are prone to two major types of errors. The first error (the modeling one) occurs when the intended objective function of the CF (as a rule, verbally formulated) is substituted by the objective function of the PMP. The second error (the algorithmic one) occurs as a direct result of applying a heuristic for solving the PMP. In this paper we show that for instances that make sense in practice, the modeling error induced by the PMP is negligible. We exclude the algorithmic error completely by solving the adjusted pseudo-Boolean formulation of the PMP exactly, which takes less than one second on a general-purpose PC and software. Our experimental study shows that the PMP-based model produces high-quality cells and in most cases outperforms several contemporary approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
42. Decomposition Based Interior Point Methods for Two-Stage Stochastic Convex Quadratic Programs with Recourse.
- Author
-
Mehrotra, Sanjay and Ozevin, M. Gokhan
- Subjects
DECOMPOSITION method ,INTERIOR-point methods ,STOCHASTIC programming ,CONVEX programming ,QUADRATIC programming ,ALGORITHMS - Abstract
Zhao showed that the log barrier associated with the recourse function of two-stage stochastic linear programs behaves as a strongly self-concordant barrier and forms a self-concordant family on the first-stage solutions. In this paper, we show that the recourse function is also strongly self-concordant and forms a self-concordant family for the two-stage stochastic convex quadratic programs with recourse. This allows us to develop Benders decomposition based linearly convergent interior point algorithms. An analysis of such an algorithm is given in this paper. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
43. Nonstationary Bandits with Habituation and Recovery Dynamics.
- Author
-
Mintz, Yonatan, Aswani, Anil, Kaminsky, Philip, Flowers, Elena, and Fukuoka, Yoshimi
- Subjects
ROBBERS ,INTERNET advertising ,STATISTICS ,ALGORITHMS ,ESTIMATION theory - Abstract
In many sequential decision-making settings where there is uncertainty about the reward of each action, frequent selection of specific actions may reduce expected reward while choosing less frequently selected actions could lead to an increase. These effects are commonly observed in settings ranging from personalized healthcare interventions and targeted online advertising. To address this problem, the authors propose a new class of models called ROGUE (reducing or gaining unknown efficacy) multiarmed bandits. In the paper, the authors present a maximum likelihood approach to estimate the parameters of these models, and we show that these estimates can be used to construct upper confidence bound algorithms and epsilon-greedy algorithms for optimizing these models with strong theoretical guarantees. The authors conclude with a simulation study to show that these algorithms perform better than current nonstationary bandit algorithms in terms of both cumulative regret and average reward. Many settings involve sequential decision making where a set of actions can be chosen at each time step, each action provides a stochastic reward, and the distribution for the reward provided by each action is initially unknown. However, frequent selection of a specific action may reduce the expected reward for that action, whereas abstaining from choosing an action may cause its expected reward to increase. Such nonstationary phenomena are observed in many real-world settings such as personalized healthcare adherence–improving interventions and targeted online advertising. Though finding an optimal policy for general models with nonstationarity is PSPACE-complete, we propose and analyze a new class of models called reducing or gaining unknown efficacy (ROGUE) bandits, which we show in this paper can capture these phenomena and are amenable to the design of policies with provable properties. We first present a consistent maximum likelihood approach to estimate the parameters of these models and conduct a statistical analysis to construct finite sample concentration bounds. Using this analysis, we develop and analyze two different algorithms for optimizing ROGUE models: an upper confidence bound algorithm (ROGUE-UCB) and an ɛ-greedy algorithm (ɛ-ROGUE). Our theoretical analysis shows that under proper conditions, the ROGUE-UCB and ɛ-ROGUE algorithms can achieve logarithmic in time regret, unlike existing algorithms, which result in linear regret. We conclude with a numerical experiment using real-world data from a personalized healthcare adherence–improving intervention to increase physical activity. In this intervention, the goal is to optimize the selection of messages (e.g., confidence increasing versus knowledge increasing) to send to each individual each day to increase adherence and physical activity. Our results show that ROGUE-UCB and ɛ-ROGUE perform better in terms of aggregated regret and average reward when compared with state-of-the-art algorithms, and in the context of this intervention, the use of ROGUE-UCB increases daily step counts by roughly 1,000 steps a day (about a half-mile more of walking) compared with other algorithms in a simulation experiment. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
44. Technical Note—Cyclic Variables and Markov Decision Processes.
- Author
-
Haviv, Avery
- Subjects
MARKOV chain Monte Carlo ,MARKOV processes ,ALGORITHMS ,MONEY ,COMPUTATIONAL complexity ,HUMAN behavior models - Abstract
Markov decision processes are commonly used to model forward-looking behavior. However, cyclic terms, including seasonality, are often omitted from these models because of the increase in computational burden. This paper develops a cyclic value function iteration (CVFI), an adjustment to the standard value function iteration. By updating states in a specific order, CVFI allows cyclic variables to be included in the state space with no increase in the computational cost. This result is proved theoretically and shown to hold closely in Monte Carlo simulations. In this paper I develop a cyclic value function iteration, which is an adjustment to the standard value function iteration. When using this algorithm, the inclusion of cyclic variables of any size into the state space of an infinite horizon Markov decision process does not increase the computational complexity of solving for the value function. This result is proven theoretically and shown to closely hold in practice using Monte Carlo simulations. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
45. A Fictitious Play Approach to Large-Scale Optimization.
- Author
-
Lambert III, Theodore J., Epelman, Marina A., and Smith, Robert L.
- Subjects
ALGORITHMS ,GAME theory ,MATHEMATICAL optimization ,DISCRETE groups ,SIMULATION methods & models ,MATHEMATICAL models - Abstract
In this paper, we investigate the properties of the sampled version of the fictitious play algorithm, familiar from game theory, for games with identical payoffs, and propose a heuristic based on fictitious play as a solution procedure for discrete optimization problems of the form max {u(y) : y = (y₁, …, yₙ) ∈ Y₁ × … × Yₙ}, i.e., problems in which the feasible region is a Cartesian product of finite sets Yᵢ, i ∈ N = {1, …, n}. The contributions of this paper are twofold. In the first part of the paper, we broaden the existing results on convergence properties of the fictitious play algorithm on games with identical payoffs to include an approximate fictitious play algorithm that allows for errors in players' best replies. Moreover, we introduce sampling-based approximate fictitious play that possesses the above convergence properties and, at the same time, provides a computationally efficient method for implementing fictitious play. In the second part of the paper, we motivate the use of algorithms based on sampled fictitious play to solve optimization problems in the above form, with particular focus on problems in which the objective function u(·) comes from a "black box," such as a simulation model, where significant computational effort is required for each function evaluation. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
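Illustrative note on the entry above: a rough sketch of sampled fictitious play for maximizing a black-box objective u(y) over a Cartesian product of finite sets. It is a simplified heuristic in the spirit of the abstract, not the authors' exact algorithm; the toy objective stands in for a simulation model.

```python
# A sketch of sampled fictitious play: each coordinate ("player") keeps a
# history of past plays, samples one past play of every other player, and
# best-responds over its own finite set under the black-box objective u.
# Function names and the toy objective are assumptions for illustration.
import random

def sampled_fictitious_play(domains, u, iterations=200, seed=0):
    rng = random.Random(seed)
    history = [[rng.choice(d)] for d in domains]     # past plays per player
    best_y, best_val = None, float("-inf")
    for _ in range(iterations):
        # Each player samples one past play of every other player ...
        sample = [rng.choice(h) for h in history]
        for i, domain in enumerate(domains):
            # ... and best-responds over its own finite set.
            def payoff(v):
                y = sample[:i] + [v] + sample[i + 1:]
                return u(y)
            reply = max(domain, key=payoff)
            history[i].append(reply)
            val = payoff(reply)
            if val > best_val:
                best_val = val
                best_y = sample[:i] + [reply] + sample[i + 1:]
    return best_y, best_val

if __name__ == "__main__":
    # Toy separable-plus-coupling objective standing in for a simulation model.
    doms = [list(range(5))] * 4
    u = lambda y: -sum((v - 3) ** 2 for v in y) - (y[0] - y[1]) ** 2
    print(sampled_fictitious_play(doms, u))
```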
46. Pricing American Options: A Duality Approach.
- Author
-
Haugh, Martin B. and Kogan, Leonid
- Subjects
OPTIONS (Finance) ,PRICING ,MONTE Carlo method ,DYNAMIC programming ,ALGORITHMS - Abstract
We develop a new method for pricing American options. The main practical contribution of this paper is a general algorithm for constructing upper and lower bounds on the true price of the option using any approximation to the option price. We show that our bounds are tight, so that if the initial approximation is close to the true price of the option, the bounds are also guaranteed to be close. We also explicitly characterize the worst-case performance of the pricing bounds. The computation of the lower bound is straightforward and relies on simulating the suboptimal exercise strategy implied by the approximate option price. The upper bound is also computed using Monte Carlo simulation. This is made feasible by the representation of the American option price as a solution of a properly defined dual minimization problem, which is the main theoretical result of this paper. Our algorithm proves to be accurate on a set of sample problems where we price call options on the maximum and the geometric mean of a collection of stocks. These numerical results suggest that our pricing method can be successfully applied to problems of practical interest. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
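Illustrative note on the entry above: the sketch below shows only the lower-bound half of the approach described in the abstract, namely simulating paths and exercising according to a suboptimal policy implied by an approximate option value, so the discounted average payoff is a valid lower bound on the American price. The dual upper bound is not shown, and using the European put as the approximate value is an assumption made for illustration.

```python
# Monte Carlo lower bound for an American put under geometric Brownian motion:
# exercise whenever the intrinsic value beats an approximate continuation value
# (here, the Black-Scholes European put). The policy is suboptimal, so the
# estimate is a lower bound. All parameter values are illustrative assumptions.
import math
import random

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def european_put(S, K, r, sigma, T):
    if T <= 0:
        return max(K - S, 0.0)
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return K * math.exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

def lower_bound_american_put(S0=100, K=100, r=0.05, sigma=0.2, T=1.0,
                             steps=50, paths=5000, seed=1):
    dt = T / steps
    rng = random.Random(seed)
    total = 0.0
    for _ in range(paths):
        S = S0
        payoff = 0.0
        for n in range(1, steps + 1):
            z = rng.gauss(0.0, 1.0)
            S *= math.exp((r - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z)
            intrinsic = max(K - S, 0.0)
            # Exercise when intrinsic value beats the approximate continuation value.
            if intrinsic > 0 and intrinsic >= european_put(S, K, r, sigma, T - n * dt):
                payoff = math.exp(-r * n * dt) * intrinsic
                break
        total += payoff
    return total / paths

if __name__ == "__main__":
    print(lower_bound_american_put())
```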
47. Multiairport ground holding problem: A computational evaluation of exact algorithms.
- Author
-
Andreatta, Giovanni and Brunetta, Lorenzo
- Subjects
AIR traffic control ,ALGORITHMS ,COMMERCIAL aeronautics ,AERONAUTICS ,HYPOTHESIS ,AIR travel - Abstract
Congestion in the air traffic network is becoming an increasingly serious problem that causes inconvenience to passengers, losses to airlines and, last but not least, threats to airspace safety. One way of reducing congestion is to use Ground Holding policies, i.e., to impose on selected aircraft a ground holding prior to their departure so that congestion during peak periods of time may be smoothed away. In this paper we restrict our attention to the Multiairport Ground Holding problem, where congestion may arise only at the airports due to limited arrival capacity. There are a few algorithms that, under suitable hypotheses, find an "optimal" policy for the Multiairport Ground Holding problem. In this paper we evaluate and compare computationally three of them, namely, the one recently proposed by Vranas, Bertsimas, and Odoni; the one suggested by Andreatta and Tidona; and the one due to Bertsimas and Stock. The computational evaluation is based on two sets of test problems. The first set consists of seven problems taken from the literature. The second set consists of 32 "realistic" test problems. The results indicate the superiority of the Bertsimas and Stock approach among the three models considered. [ABSTRACT FROM AUTHOR]
- Published
- 1998
- Full Text
- View/download PDF
48. A METHOD TO CALCULATE STEADY-STATE DISTRIBUTIONS OF LARGE MARKOV CHAINS.
- Author
-
Feinberg, Brion N. and Chiu, Samuel S.
- Subjects
ITERATIVE methods (Mathematics) ,NUMERICAL analysis ,ALGORITHMS ,MARKOV processes ,MATHEMATICS ,STOCHASTIC processes - Abstract
This paper develops an efficient iterative algorithm to calculate the steady-state distribution of nearly all irreducible discrete-time Markov chains. Computational experiences suggest that, for large Markovian systems (more than 130 states), the proposed algorithm can be ten times faster than standard Gaussian elimination in finding solutions to an accuracy of 0.1%. The proposed algorithm is developed in three stages. First, we develop a very efficient algorithm for determining steady-state distributions of a restricted class of Markovian systems. A second result establishes a relationship between a general irreducible Markovian system and a system in the restricted class of Markovian systems. Finally, we combine the two results to produce an efficient, iterative algorithm to solve Markov systems. The paper concludes with a discussion of the observed performance of the algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 1987
- Full Text
- View/download PDF
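Illustrative note on the entry above: the paper's three-stage algorithm is not reproduced here; the sketch below is only the textbook power-iteration baseline for the stationary distribution π satisfying π = πP of an irreducible, aperiodic discrete-time Markov chain, with a made-up transition matrix.

```python
# Baseline power iteration for the stationary distribution of a discrete-time
# Markov chain (pi = pi P). Not the authors' algorithm; a point of reference.
import numpy as np

def stationary_distribution(P, tol=1e-10, max_iter=100000):
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)           # start from the uniform distribution
    for _ in range(max_iter):
        new_pi = pi @ P                # advance the distribution by one step
        if np.abs(new_pi - pi).sum() < tol:
            return new_pi
        pi = new_pi
    return pi

if __name__ == "__main__":
    P = np.array([[0.5, 0.3, 0.2],     # made-up irreducible transition matrix
                  [0.1, 0.7, 0.2],
                  [0.2, 0.3, 0.5]])
    pi = stationary_distribution(P)
    print(pi, "check:", np.allclose(pi, pi @ P))
```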
49. A Computer-Assisted System for the Routing and Scheduling of Street Sweepers.
- Author
-
Bodin, Lawrence D. and Kursh, Samuel J.
- Subjects
STREET cleaners ,SANITATION workers ,SCHEDULING ,COMPUTERS ,PRODUCTION scheduling ,COMPUTER assisted instruction ,ALGORITHMS - Abstract
This paper discusses a computer-assisted method for routing and scheduling street sweepers in a municipality. We present the basic structure of this vehicle routing and scheduling problem, formulate the street-sweeper routing problem, and explain the algorithm. The computer implementation based on this algorithm is then described. Computational experience with the system in New York City and Washington, D.C., is presented and the obstacles to and successes with implementation are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 1978
- Full Text
- View/download PDF
50. The Optimal Control of Partially Observable Markov Processes over the Infinite Horizon: Discounted Costs.
- Author
-
Sondik, Edward J.
- Subjects
ALGORITHMS ,MATHEMATICAL optimization ,OPERATIONS research ,MARKOV processes ,STOCHASTIC processes ,COST accounting - Abstract
This paper treats the discounted cost, optimal control problem for Markov processes with incomplete state information. The optimization approach for these partially observable Markov processes is a generalization of the well-known policy iteration technique for finding optimal stationary policies for completely observable Markov processes. The state space for the problem is the space of state occupancy probability distributions (the unit simplex). The development of the algorithm introduces several new ideas, including the class of finitely transient policies, which are shown to possess piecewise linear cost functions. The paper develops easily implemented approximations to stationary policies based on these finitely transient policies and shows that the concave hull of an approximation can be included in the well-known Howard policy improvement algorithm with subsequent convergence. The paper closes with a detailed example illustrating the application of the algorithm to the two-state partially observable Markov process. [ABSTRACT FROM AUTHOR]
- Published
- 1978
- Full Text
- View/download PDF
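Illustrative note on the entry above: the sketch below illustrates the piecewise-linear value representation the abstract refers to, in which the value function over the belief simplex is the upper envelope of a finite set of alpha vectors and a dynamic-programming backup generates new vectors by enumerating action/observation pairings. It is generic value iteration with a simple grid-based (point-based) pruning step, not Sondik's finitely transient policy construction; the two-state model and all numbers are made up.

```python
# Alpha-vector value iteration for a tiny two-state POMDP. Each backup builds
# candidate vectors alpha(s) = R(s,a) + gamma * sum_o sum_s' T(s'|s,a) O(o|s',a) alpha_o(s'),
# and a belief-grid pruning step keeps the set small (an approximation; exact
# pruning would use linear programs). All model data are assumptions.
import itertools
import numpy as np

gamma = 0.95
R = np.array([[1.0, 0.0],      # R[a, s]: reward for action a in state s
              [0.0, 1.5]])
T = np.array([[[0.9, 0.1],     # T[a, s, s']: state transition probabilities
               [0.2, 0.8]],
              [[0.5, 0.5],
               [0.5, 0.5]]])
O = np.array([[[0.8, 0.2],     # O[a, s', o]: observation probabilities
               [0.3, 0.7]],
              [[0.6, 0.4],
               [0.4, 0.6]]])
beliefs = [np.array([p, 1.0 - p]) for p in np.linspace(0.0, 1.0, 21)]

def backup(alphas):
    """One DP backup: candidate alpha vectors for the next horizon."""
    n_actions, n_obs = R.shape[0], O.shape[2]
    new = []
    for a in range(n_actions):
        # Project each current vector through (action a, observation o).
        proj = [[T[a] @ (O[a][:, o] * alpha) for alpha in alphas]
                for o in range(n_obs)]
        # One candidate per way of pairing a projected vector with each observation.
        for choice in itertools.product(range(len(alphas)), repeat=n_obs):
            new.append(R[a] + gamma * sum(proj[o][choice[o]] for o in range(n_obs)))
    return new

def prune(vectors):
    """Point-based pruning: keep vectors that are best at some grid belief."""
    kept = {}
    for b in beliefs:
        best = max(range(len(vectors)), key=lambda i: float(b @ vectors[i]))
        kept[best] = vectors[best]
    return list(kept.values())

alphas = [np.zeros(2)]
for _ in range(40):
    alphas = prune(backup(alphas))
b = np.array([0.4, 0.6])
print("approximate value at belief", b, "is", max(float(b @ a) for a in alphas))
```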