1,895 results
Search Results
2. Order allocation for stock cutting in the paper industry
- Author
-
Menon, Syam and Schrage, Linus
- Subjects
Paper industry -- Research -- Analysis ,Management science -- Analysis -- Research ,Logistics -- Research -- Analysis ,Business ,Mathematics ,Analysis ,Research - Abstract
A common problem encountered in paper-production facilities is that of allocating customer orders to machines so as to minimize the total cost of production. It can be formulated as a [...]
- Published
- 2002
3. Analysis of distribution strategies in the industrial paper and plastics industry
- Author
-
Cohen, Morris A., Agrawal, Narendra, Agrawal, Vipul, and Raman, Ananth
- Subjects
Paper industry -- Distribution ,Plastics industry -- Distribution ,Distributors (Commerce) -- Research ,Distribution of goods -- Research ,Business ,Mathematics - Abstract
A study is conducted to examine the costs, benefits and strategic value of the redistribution function in the industrial paper and plastics industry. Specifically, the study seeks to find out what the value-added of the redistribution channel is, how distributors should source their products, how manufacturing companies can use their channel management policies to boost their profits, and how inventory and stock control policies impact the aforementioned issues. Results offer an economic justification for the role of redistributor in the industrial paper and plastics industry. The existence of middlemen in the distribution system is found to be advantageous for all players in the industry. They boost the inventory turns of distributors and allow manufacturers to cater to small distributors. Without redistributors, the costs of distributors would soar.
- Published
- 1995
4. Understanding linear programming modeling through an examination of the early papers on model formulation
- Author
-
Murphy, Frederic H. and Panchanadam, Venkat
- Subjects
Linear programming -- Analysis ,Cognitive psychology -- Analysis ,Business ,Mathematics - Abstract
Literature in cognitive psychology is used to analyze the thought processes behind the development of linear programming (LP) models. Emphasis is on understanding how the authors organize problems and models in their work. A kind of cognitive history of the early years of LP model building is thus drawn. Three hypotheses are investigated, namely, an organization of problems into categories by kinds of transformations, organization by model type and organization of problems and models by industry. Analysis of historical processes confirms that there is transfer and an underlying organization and that there are multiple dimensions forming relationships among problems and models. The three hypotheses forwarded are confirmed to varying degrees.
- Published
- 1997
5. Research strategies used by OR/MS workers as shown by an analysis of papers in flagship journals
- Author
-
Reisman, Arnold and Kirschnick, Frank
- Subjects
Operations research -- Methods ,Management science -- Research ,Business ,Mathematics - Abstract
Seven process categories of research strategies used by operations research (OR) and management science (MS) workers are identified. These strategies are ripple, embedding, bridging, structuring, creative application, statistical modeling and transfer of technology. A sample of articles appearing in the 1992 issues of the flagship journals Operations Research, Management Science and Interfaces is analyzed to determine which of the aforementioned strategies are commonly used by OR/MS researchers. The study also aims to link these research methodologies to the types of research work found at the extremes of Reisman and Kirschnick's (1994) six-point scale for categorizing OR/MS work. The results show that the ripple process is most often applied in theoretical research, while the transfer-of-technology process is most used in true applications.
- Published
- 1995
6. The devolution of OR/MS: implications from a statistical content analysis of papers in flagship journals
- Author
-
Reisman, Arnold and Kirschnick, Frank
- Subjects
Operations research -- Analysis ,Management science -- Analysis ,Business ,Mathematics - Abstract
Some operations research (OR) practitioners have complained about the field's 'devolution' or 'regression.' In the past, the science was market-oriented in that researchers sought solutions to real-world problems. However, current OR is input-oriented in that mathematicians search for problems that fit their models. A detailed survey of OR and management science publications is conducted to ascertain the existence of this trend. In particular, the amount of space allocated to theory relative to applications is analyzed for such flagship journals as Operations Research, Management Science and Interfaces, using a five-point classification scale. Comparison of the 1962 and 1992 editions of the first two journals, and the 1972 and 1992 editions of the last, reveals that usage of the terms 'application' and 'data' differs in degree and kind, respectively. The results suggest that OR practitioners need to reassess the profession and chart its future course.
- Published
- 1994
7. Correction to the paper 'optimal product launch times in a duopoly: balancing life-cycle revenues with product cost'
- Author
-
Guseo, Renato and Mortarino, Cinzia
- Subjects
Business ,Mathematics - Abstract
The aim of this note is to correct an error in the formulation of Theorem 1 by Savin and Terwiesch [Savin, S., C. Terwiesch. 2005. Optimal product launch times in [...]
- Published
- 2010
8. Comments on the paper: 'Heuristic and Special Case Algorithms for Dispersion Problems' by S.S. Ravi, D.J. Rosenkrantz, and G.K. Tayi
- Author
-
Tamir, Arie
- Subjects
Algorithms -- Analysis ,Heuristic programming -- Analysis ,Business ,Mathematics - Abstract
Some of the views of S.S. Ravi, D.J. Rosenkrantz and G.K. Tayi regarding algorithms for facility dispersion models are flawed. They provided a heuristic for Max-Min Facility Dispersion (MMFD) with a performance guarantee of 50% and one for Max-Average Facility Dispersion (MAFD) with a 25% guarantee. The MMFD heuristic was presented as more general than the model actually analyzed in 1991. They also credited the MMFD with the lowest complexity bound, citing a study conducted by D.W. Wang and Y.S. Kuo in 1988. Moreover, the MAFD was associated with a dynamic programming algorithm regarded as a trivial optimal solution. The extension of the one-dimensional version of MAFD to tree networks can also be solved with an approach from 1991.
- Published
- 1998
9. A dynamic stochastic stock-cutting problem
- Author
-
Krichagina, Elena V., Rubio, Rodrigo, Taksar, Michael I., and Wein, Lawrence M.
- Subjects
Strathmore Paper Co. -- Production management -- 00097709 ,Inventory control -- Models ,Production control -- Models ,Scheduling (Management) -- Models ,Linear programming -- Usage ,Brownian motion -- Usage ,Stochastic programming -- Analysis ,Paper industry -- Production management ,Business ,Mathematics - Abstract
A stock cutting problem was examined in one of the facilities of Strathmore Paper Co., a paper plant that makes different-sized sheets for a finished goods inventory that meets random customer demand. In the factory, the controller decides when to shut down and restart the paper machine and how to cut finished paper rolls into sheets of paper. A procedure involving linear programming and Brownian control that generates an effective but suboptimal solution was devised. The methodology was found to be easy to use and to simplify the production process, since the linear program significantly limits the number of cutting configurations that can be utilized in the Brownian analysis. It performed better than other policies using a larger number of cutting configurations.
- Published
- 1998
10. BPSS: a scheduling support system for the packaging industry
- Author
-
Adler, Leonard, Fraiman, Nelson, Kobacker, Edward, Pinedo, Michael, Plotnicoff, Juan Carlos, and Tso Pang Wu
- Subjects
Packaging industry -- Logistics ,Paper products industry -- Production management ,Production management -- Models ,Scheduling (Management) -- Models ,Business ,Mathematics - Abstract
The architecture of a scheduling support system that utilizes a five-step algorithm accounting for all jobs at different stages is discussed. The scheduling framework incorporates priority rules designed for a flowshop where at least one phase constitutes a bottleneck. The algorithm has become a composite part of the Bagpak Production Scheduling System (BPSS), which has been in use in two packaging factories. This system includes database management and user interface modules. The machine setup in these plants approximates a flexible flowshop, with stages in series and parallel machines at each stage of the production process. The jobs are well defined in terms of shipping dates and priorities as well as processing and setup times. Schedulers in the two factories using BPSS confirm the system's high degree of usability and accuracy.
- Published
- 1993
11. Analysis of Markov Influence Graphs
- Author
-
Berkhout, Joost and Heidergott, Bernd F.
- Subjects
Markov processes -- Methods -- Reports ,Social networks -- Reports ,Business ,Mathematics - Abstract
The research presented in this paper is motivated by the growing interest in the analysis of networks found in the World Wide Web and of social networks. In this paper, we elaborate on the Kemeny constant as a measure of connectivity of the weighted graph associated with a Markov chain. For finite Markov chains, the Kemeny constant can be computed by means of simple algebra via the deviation matrix and the ergodic projector of the chain. Using this fact, we introduce a new decomposition algorithm for Markov chains that splits the graph the Markov chain is defined on into subgraphs, such that the connectivity of the chain measured by the Kemeny constant is maximally decreased. We discuss applications of our decomposition algorithm to influence ranking in social networks, decomposition of a social network into subnetworks, identification of nearly decomposable chains, and cluster analysis. Keywords: Markov influence graphs * social networks * deviation matrix * Markov multichains * Kemeny decomposition algorithm * nearly decomposable Markov chains, 1. Introduction Consider a directed graph with finite node set S = {1, ..., n} and set of edges E ⊆ S × S. Let a Markov chain P be [...]
- Published
- 2019
- Full Text
- View/download PDF
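The Kemeny-constant computation mentioned in the record above can be illustrated directly. The sketch below is not the authors' code: it assumes a small made-up ergodic chain and the convention in which the constant equals the trace of the deviation matrix (some references add 1), and it cross-checks the result against the eigenvalue formula.

```python
import numpy as np

# Transition matrix of a small ergodic Markov chain (made up for illustration).
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.3, 0.3, 0.4]])
n = P.shape[0]

# Stationary distribution: solve pi P = pi with sum(pi) = 1.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.append(np.zeros(n), 1.0)
pi = np.linalg.lstsq(A, b, rcond=None)[0]

Pi = np.outer(np.ones(n), pi)            # ergodic projector (every row equals pi)
Z = np.linalg.inv(np.eye(n) - P + Pi)    # fundamental matrix
D = Z - Pi                               # deviation matrix

kemeny = np.trace(D)                     # Kemeny constant (up to a +1 convention)

# Cross-check: K = sum over eigenvalues lambda != 1 of 1 / (1 - lambda).
lams = np.linalg.eigvals(P)
kemeny_eig = sum(1.0 / (1.0 - lam) for lam in lams if abs(lam - 1.0) > 1e-9).real
print(kemeny, kemeny_eig)                # the two values agree
```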
12. Robust Dual Dynamic Programming
- Author
-
Georghiou, Angelos, Tsoukalas, Angelos, and Wiesemann, Wolfram
- Subjects
Dynamic programming -- Methods ,Stochastic programming -- Methods ,Algorithms -- Usage ,DNA polymerases -- Usage ,Algorithm ,Business ,Mathematics - Abstract
Multistage robust optimization problems, where the decision maker can dynamically react to consecutively observed realizations of the uncertain problem parameters, pose formidable theoretical and computational challenges. As a result, the existing solution approaches for this problem class typically determine suboptimal solutions under restrictive assumptions. In this paper, we propose a robust dual dynamic programming (RDDP) scheme for multistage robust optimization problems. The RDDP scheme takes advantage of the decomposable nature of these problems by bounding the costs arising in the future stages through lower and upper cost-to-go functions. For problems with uncertain technology matrices and/or constraint right-hand sides, our RDDP scheme determines an optimal solution in finite time. Also, if the objective function and/or the recourse matrices are uncertain, our method converges asymptotically (but deterministically) to an optimal solution. Our RDDP scheme does not require a relatively complete recourse, and it offers deterministic upper and lower bounds throughout the execution of the algorithm. We show the promising performance of our algorithm in a stylized inventory management problem. Keywords: robust optimization * multistage problems * dual dynamic programming * error bounds, 1. Introduction In this paper, we study multistage robust optimization problems of the form [mathematical expression not reproducible] (1) where the parameters ξ_t are revealed at the beginning of stage [...]
- Published
- 2019
- Full Text
- View/download PDF
13. OR Forum--Public Health Preparedness: Answering (Largely Unanswerable) Questions with Operations Research--The 2016-2017 Philip McCord Morse Lecture
- Author
-
Brandeau, Margaret L.
- Subjects
United States. Department of Homeland Security -- Research ,Bioterrorism -- Research ,National security -- Research ,Public health -- Analysis -- Research ,Business ,Mathematics ,Stanford University -- Research - Abstract
Public health security--achieved by effectively preventing, detecting, and responding to events that affect public health such as bioterrorism, disasters, and naturally occurring disease outbreaks--is a key aspect of national security. However, effective public health preparedness depends on answering largely unanswerable questions. For example: What is the chance of a bioterror attack in the United States in the next five years? What is the chance of an anthrax attack? What might be the location and magnitude of such an attack? This paper describes how OR-based analyses can provide insight into complex public health preparedness planning problems--and thus support good decisions. Three examples from the author's research are presented: logistics of response to an anthrax attack, prepositioning of medical countermeasures for anthrax, and stockpiling decisions for the United States' Strategic National Stockpile. Keywords: professional addresses * government planning * ORMS philosophy, 1. Introduction This paper discusses public health preparedness and, in particular, how we can use operations research to help obtain answers to questions that are in many ways unanswerable. Public [...]
- Published
- 2019
- Full Text
- View/download PDF
14. Input-Output Uncertainty Comparisons for Discrete Optimization via Simulation
- Author
-
Song, Eunhye and Nelson, Barry L.
- Subjects
Uncertainty -- Analysis ,Decision-making -- Analysis -- Models ,Mathematical optimization -- Analysis ,Business ,Mathematics - Abstract
When input distributions to a simulation model are estimated from real-world data, they naturally have estimation error causing input uncertainty in the simulation output. If an optimization via simulation (OvS) method is applied that treats the input distributions as 'correct,' then there is a risk of making a suboptimal decision for the real world, which we call input model risk. This paper addresses a discrete OvS (DOvS) problem of selecting the real-world optimal from among a finite number of systems when all of them share the same input distributions estimated from common input data. Because input uncertainty cannot be reduced without collecting additional real-world data--which may be expensive or impossible--a DOvS procedure should reflect the limited resolution provided by the simulation model in distinguishing the real-world optimal solution from the others. In light of this, our input-output uncertainty comparisons (IOU-C) procedure focuses on comparisons rather than selection: it provides simultaneous confidence intervals for the difference between each system's real-world mean and the best mean of the rest with any desired probability, while accounting for both stochastic and input uncertainty. To make the resolution as high as possible (intervals as short as possible) we exploit the common input data effect to reduce uncertainty in the estimated differences. Under mild conditions we prove that the IOU-C procedure provides the desired statistical guarantee asymptotically as the real-world sample size and simulation effort increase, but it is designed to be effective in finite samples. Funding: This study was supported by the National Science Foundation [Grant CMMI-1068473]. Supplemental Material: The electronic companion of this paper is available at https://doi.org/10.1287/opre.2018.1796. Keywords: optimization via simulation under input uncertainty * common-input-data effect * multiple comparisons with the best, 1. Introduction Because of the flexibility of simulation, optimization via simulation (OvS) is a widely accepted tool to improve system performance. Real-world problems typically involve stochastic processes (e.g., demand for [...]
- Published
- 2019
- Full Text
- View/download PDF
15. Dynamic Volunteer Staffing in Multicrop Gleaning Operations
- Author
-
Ata, Baris, Lee, Deishin, and Sonmez, Erkut
- Subjects
United States. Department of Agriculture -- Powers and duties ,Gleaning -- Analysis -- Methods -- Usage ,Food wastes -- Analysis -- Control -- Waste management ,Agricultural laborers -- Practice ,Business ,Mathematics - Abstract
Gleaning programs organize volunteer gleaners to harvest a variety of leftover crops that are donated by farmers for the purpose of feeding food-insecure individuals. Thus, the gleaning process simultaneously reduces food waste and food insecurity. However, the operationalization of this process is challenging because gleaning relies on two uncertain sources of input: the food and labor supplies. The purpose of this paper is to help gleaning organizations increase the (value-weighted) volume of fresh food gleaned by better managing the uncertainties in the gleaning operation. We develop a model to capture the uncertainties in food and labor supplies and seek a dynamic volunteer-staffing policy that maximizes the payoff associated with the amount of food gleaned. The exact analysis of the staffing problem seems intractable. Therefore, we resort to an approximation in the heavy traffic regime. In that regime, we characterize the system dynamics of the multicrop gleaning operation and derive the optimal staffing policy in closed form. The optimal policy is a nested threshold policy that specifies the staffing level for each class of donation (i.e., a donation of a particular crop type and donation size). The policy depends on the number of available gleaners and the backlog of gleaning donations. A numerical study using data calibrated from a gleaning organization in the Boston area shows that the dynamic staffing policy we propose can recover approximately 10% of the volume lost when the gleaning organization uses a static policy. To achieve this improvement, no capital or major process changes would be required--only some small changes to the staffing level requests. History: An earlier version of this paper was circulated with the title 'Dynamic Staffing of Volunteer Gleaning Operations' and is available from SSRN: https://ssrn.com/abstract=2873250 or http://dx.doi.org/10.2139/ssrn.2873250. Funding: This work is supported by the Neubauer Family Foundation at the University of Chicago Booth School of Business. Supplemental Material: The online appendices are available at https://doi.org/10.1287/opre.2018.1792. Keywords: gleaning * volunteer * food bank * food waste * food insecurity * dynamic control, 1. Introduction The practice of gleaning combines two societal problems to create an elegant solution for both. Gleaning dates back to ancient times when landowners allowed the poor to gather [...]
- Published
- 2019
- Full Text
- View/download PDF
16. Divide and Conquer: Recursive Likelihood Function Integration for Hidden Markov Models with Continuous Latent Variables
- Author
-
Reich, Gregor
- Subjects
Likelihood functions -- Evaluation ,Hidden Markov models -- Evaluation ,Latent variables -- Usage ,Business ,Mathematics - Abstract
This paper develops a method to efficiently estimate hidden Markov models with continuous latent variables using maximum likelihood estimation. To evaluate the (marginal) likelihood function, I decompose the integral over the unobserved state variables into a series of lower dimensional integrals, and recursively approximate them using numerical quadrature and interpolation. I show that this procedure has very favorable numerical properties: First, the computational complexity grows linearly in the number of periods, making the integration over hundreds and thousands of periods feasible. Second, I prove that the numerical error accumulates sublinearly in the number of time periods integrated, so the total error can be well controlled for a very large number of periods using, for example, Gaussian quadrature and Chebyshev polynomials. I apply this method to the bus engine replacement model of Rust [Econometrica 55(5): 999-1033] to verify the accuracy and speed of the procedure in both actual and simulated data sets. Supplemental Material: The e-companion is available at https://doi.org/10.1287/opre.2018.1750. Keywords: hidden Markov models * maximum likelihood estimation * numerical integration * interpolation, 1. Introduction This paper develops a method to efficiently estimate hidden Markov models with continuous latent variables using maximum likelihood estimation (MLE). To evaluate the (marginal) likelihood function, I decompose [...]
- Published
- 2018
- Full Text
- View/download PDF
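The recursive integration idea summarized in the record above can be sketched for a toy model. The code below is an illustration only, assuming a simple linear-Gaussian state-space model (not the bus engine replacement model from the paper) and a fixed Gauss-Legendre grid in place of the paper's quadrature-plus-Chebyshev-interpolation scheme; per-period rescaling keeps the recursion numerically stable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy model: x_t = rho * x_{t-1} + N(0, q),  y_t = x_t + N(0, r).
rho, q, r, T = 0.8, 0.5, 0.3, 200

# Simulate a data set.
x = np.zeros(T)
x[0] = rng.normal(0.0, np.sqrt(q / (1 - rho**2)))
for t in range(1, T):
    x[t] = rho * x[t - 1] + rng.normal(0.0, np.sqrt(q))
y = x + rng.normal(0.0, np.sqrt(r), size=T)

def norm_pdf(z, mean, var):
    return np.exp(-0.5 * (z - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def loglik(rho, q, r, y, m=64, width=6.0):
    """Marginal log-likelihood via the recursive quadrature approximation."""
    sd = np.sqrt(q / (1 - rho**2))                  # stationary std of the latent state
    nodes, w = np.polynomial.legendre.leggauss(m)   # Gauss-Legendre on [-1, 1]
    grid = width * sd * nodes                       # rescale nodes to [-width*sd, width*sd]
    w = width * sd * w                              # rescale weights accordingly
    trans = norm_pdf(grid[:, None], rho * grid[None, :], q)  # trans[i, j] = p(grid_i | grid_j)

    alpha = norm_pdf(grid, 0.0, sd**2) * norm_pdf(y[0], grid, r)
    ll = 0.0
    for t in range(1, len(y)):
        alpha = norm_pdf(y[t], grid, r) * (trans @ (w * alpha))
        c = alpha.max()                             # rescale each step to avoid underflow
        ll += np.log(c)
        alpha = alpha / c
    return ll + np.log(np.sum(w * alpha))

print(loglik(rho, q, r, y))
```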
17. Approximation Algorithms for a Class of Stochastic Selection Problems with Reward and Cost Considerations
- Author
-
Strinka, Zohar M.A. and Romeijn, H. Edwin
- Subjects
Mathematical optimization -- Analysis ,Learning models (Stochastic processes) -- Analysis ,Logistics -- Analysis ,Algorithms -- Analysis ,Algorithm ,Business ,Mathematics - Abstract
We study a class of problems with both binary selection decisions and associated continuous choices that result in stochastic rewards and costs. The rewards are received based on the decision maker's selection, and the costs depend both on the decisions and realizations of the stochastic variables. We consider a family of risk-based objective functions that contains the traditional risk-neutral expected-value objective as a special case. A combination of rounding and sample average approximation is used to produce solutions that are guaranteed to be close to the optimal solution with high probability. We also provide an empirical comparison of the performance of the algorithms on a set of randomly generated instances of a supply chain example problem. The computational results illustrate the theoretical claims in the paper that, for this problem, high-quality solutions can be found with small computational effort. Funding: This research was conducted with government support under and awarded by a Department of Defense (DoD), Air Force Office of Scientific Research, National Defense Science and Engineering Graduate (NDSEG) Fellowship, 32 CFR 168a. Keywords: two-stage stochastic optimization * selection * resource allocation * approximation algorithms, 1. Introduction In this paper we study a class of two-stage stochastic selection problems with recourse and develop approximation algorithms to efficiently solve them. In particular, a subset of options [...]
- Published
- 2018
- Full Text
- View/download PDF
18. Technical Note--On the Relation Between Several Discrete Choice Models
- Author
-
Feng, Guiyun, Li, Xiaobo, and Wang, Zizhuo
- Subjects
Management science -- Innovations ,Finite sets -- Usage ,Mathematical optimization -- Usage ,Business ,Mathematics - Abstract
In this paper, we study the relationship between several well known classes of discrete choice models, i.e., the random utility model (RUM), the representative agent model (RAM), and the semiparametric choice model (SCM). Using a welfare-based model as an intermediate, we show that the RAM and the SCM are equivalent. Furthermore, we show that both models as well as the welfare-based model strictly subsume the RUM when there are three or more alternatives, while the four are equivalent when there are only two alternatives. Thus, this paper presents a complete picture of the relationship between these choice models. Funding: The research of the third author is supported by the National Science Foundation [Grant CMMI-1462676]. Keywords: welfare function * random utility model * representative agent model * semiparametric choice model, 1. Introduction In this paper, we study the discrete choice models. Discrete choice models are used to model choices made by people among a finite set of alternatives. As examples, [...]
- Published
- 2017
- Full Text
- View/download PDF
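A tiny numerical check can illustrate the best-known point of contact between the model classes compared in the record above: with i.i.d. Gumbel errors, the random utility model reproduces the multinomial logit probabilities, which also arise from a representative agent maximizing expected utility plus an entropy perturbation. This sketch covers only that classical special case, not the paper's general equivalence and subsumption results; the utilities are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
v = np.array([1.0, 0.3, -0.5])       # deterministic utilities of three alternatives (assumed)

# Random utility model (RUM) view: add i.i.d. Gumbel noise and choose the argmax.
n = 200_000
choices = np.argmax(v + rng.gumbel(size=(n, v.size)), axis=1)
p_sim = np.bincount(choices, minlength=v.size) / n

# Representative-agent view: maximizing p @ v plus the Shannon entropy of p gives
# the closed-form logit probabilities.
p_logit = np.exp(v) / np.exp(v).sum()

print(np.round(p_sim, 3), np.round(p_logit, 3))   # the two should nearly coincide
```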
19. Surviving a National Football League Survivor Pool
- Author
-
Bergman, David and Imbrogno, Jason
- Subjects
Sports associations -- Management ,Football (Professional) -- Competitions ,Company business management ,Business ,Mathematics ,National Football League -- Management - Abstract
In this paper, an analytical approach to National Football League (NFL) survival pools is investigated. This paper introduces into the literature NFL survival pools and presents optimization models for [...]
- Published
- 2017
- Full Text
- View/download PDF
20. An exact method for the capacitated location-routing problem
- Author
-
Baldacci, Roberto, Mingozzi, Aristide, and Calvo, Roberto Wolfler
- Subjects
Dynamic programming -- Analysis ,Location theory -- Usage -- Methods ,Transportation -- Research ,Algorithms -- Methods ,Algorithm ,Business ,Mathematics - Abstract
The capacitated location-routing problem (LRP) consists of opening one or more depots on a given set of a-priori defined depot locations, and designing, for each opened depot, a number of routes in order to supply the demands of a given set of customers. With each depot are associated a fixed cost for opening it and a capacity that limits the quantity that can be delivered to the customers. The objective is to minimize the sum of the fixed costs for opening the depots and the costs of the routes operated from the depots. This paper describes a new exact method for solving the LRP based on a set-partitioning-like formulation of the problem. The lower bounds produced by different bounding procedures, based on dynamic programming and dual ascent methods, are used by an algorithm that decomposes the LRP into a limited set of multicapacitated depot vehicle-routing problems (MCDVRPs). Computational results on benchmark instances from the literature show that the proposed method outperforms the current best-known exact methods, both for the quality of the lower bounds achieved and the number and the dimensions of the instances solved to optimality. Subject classifications: location routing; set partitioning; dual ascent; dynamic programming. Area of review: Transportation. History: Received July 2009; revisions received July 2010, October 2010; accepted January 2011., 1. Introduction The capacitated location-routing problem (LRP) considered in this paper can be described as follows. A complete and undirected graph G = (V', E) is given, where V' = [...]
- Published
- 2011
21. Polynomial-time algorithms for stochastic uncapacitated lot-sizing problems
- Author
-
Guan, Yongpei and Miller, Andrew J.
- Subjects
Real estate development -- Analysis ,Algorithms -- Analysis ,Business ,Mathematics ,Algorithm ,Analysis - Abstract
In 1958, Wagner and Whitin published a seminal paper on the deterministic uncapacitated lot-sizing problem, a fundamental model that is embedded in many practical production planning problems. In this paper, [...]
- Published
- 2008
- Full Text
- View/download PDF
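For reference, the deterministic model cited in the record above admits the classic dynamic program sketched below. This is only an illustration of the textbook Wagner-Whitin recursion on made-up data, not the stochastic polynomial-time algorithms developed in the paper; a naive cubic-time implementation is used for clarity (prefix sums would reduce it to quadratic time).

```python
def wagner_whitin(d, K, h):
    """Deterministic uncapacitated lot-sizing: d[t] demand, K[t] setup cost,
    h[t] per-unit holding cost from period t to t+1 (all 0-indexed)."""
    T = len(d)
    C = [0.0] + [float("inf")] * T      # C[t] = min cost of covering periods 0..t-1
    last = [0] * (T + 1)                # period of the last setup, for backtracking
    for t in range(1, T + 1):
        for s in range(1, t + 1):       # last production run occurs in period s-1
            hold = sum(d[u] * sum(h[s - 1:u]) for u in range(s - 1, t))
            cost = C[s - 1] + K[s - 1] + hold
            if cost < C[t]:
                C[t], last[t] = cost, s
    plan, t = [], T                     # recover the lots: period s-1 covers s-1..t-1
    while t > 0:
        s = last[t]
        plan.append((s - 1, sum(d[s - 1:t])))
        t = s - 1
    return C[T], plan[::-1]

demand  = [20, 50, 10, 50, 50]
setup   = [100] * 5
holding = [1] * 5
print(wagner_whitin(demand, setup, holding))
```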
23. Supply function equilibrium in a constrained transmission system
- Author
-
Wilson, Robert
- Subjects
Game theory -- Analysis ,Equilibrium (Economics) -- Analysis ,Business ,Mathematics ,Analysis - Abstract
This paper characterizes a supply function equilibrium in an auction market constrained by limited capacities of links in a transportation network and limited input/output capacities of participants. The formulation is adapted to a wholesale spot market for electricity managed by the operator of the transmission system. The results are derived using the calculus of variations to obtain the Euler conditions and the transversality conditions that characterize a Nash equilibrium in an auction in which bids are supply functions, and quantities and payments are based either on nodal prices or pay-as-bid. Subject classifications: industries: electric; games/group decisions: bidding/auctions; networks/graphs: applications. Area of review: Environment, Energy, and Natural Resources., 1. Introduction This paper derives conditions that characterize equilibria in an auction market with multilateral trading of the kind conducted by system operators in the electricity industry. These are wholesale [...]
- Published
- 2008
24. Optimal dynamic trading strategies with risk limits
- Author
-
Cuoco, Domenico, He, Hua, and Isaenko, Sergei
- Subjects
Economic conditions -- Forecasts and trends ,Investment companies -- Management -- Forecasts and trends ,Business ,Mathematics ,Company business management ,Market trend/market analysis ,Management ,Forecasts and trends - Abstract
Value at Risk (VaR) has emerged in recent years as a standard tool to measure and control the risk of trading portfolios. Yet, existing theoretical analysis of the optimal behavior of a trader subject to VaR limits has produced a negative view of VaR as a risk-control tool. In particular, VaR limits have been found to induce increased risk exposure in some states and an increased probability of extreme losses. However, these conclusions are based on models that are either static or dynamically inconsistent. In this paper, we formulate a dynamically consistent model of optimal portfolio choice subject to VaR limits and show that the concerns expressed in earlier papers do not apply if, consistently with common practice, the VaR limit is reevaluated dynamically. In particular, we find that the optimal risk exposure of a trader subject to a VaR limit is always lower than that of an unconstrained trader and that the probability of extreme losses is also lower. We also consider risk limits formulated in terms of tail conditional expectation (TCE), a coherent risk measure often advocated as an alternative to VaR, and show that in our dynamic setting it is always possible to transform a TCE limit into an equivalent VaR limit, and conversely. Subject classifications: investment; management; portfolio. Area of review: Finance., 1. Introduction Investment firms customarily limit the discretionality of their traders by imposing limits on the risk of trading portfolios. Because Value at Risk (VaR) has gained in recent years [...]
- Published
- 2008
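For readers unfamiliar with the two risk measures compared in the record above, the standard textbook definitions are reproduced below; sign and level conventions vary across the literature and may differ from the paper's.

```latex
\operatorname{VaR}_{\alpha}(L) = \inf\{\, \ell \in \mathbb{R} : \Pr(L \le \ell) \ge \alpha \,\},
\qquad
\operatorname{TCE}_{\alpha}(L) = \mathbb{E}\bigl[\, L \mid L \ge \operatorname{VaR}_{\alpha}(L) \,\bigr],
```

where L is the portfolio loss over the horizon and α is the confidence level (for example, 0.99).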
25. The OR/MS ecosystem: strengths, weaknesses, opportunities, and threats
- Author
-
Sodhi, ManMohan S. and Tang, Christopher S.
- Subjects
Educators -- Evaluation -- Research -- Reports ,Business ,Mathematics ,Evaluation ,Research ,Reports - Abstract
We believe that research, teaching, and practice are becoming increasingly disengaged from one another in the OR/MS ecosystem. This ecosystem comprises researchers, educators, and practitioners in its core along with end users, universities, and funding agencies. Continuing disengagement will result in OR/MS occupying only niche areas and disappearing as a distinct field even though its tools would live on. To understand the reasons for this disengagement better and to engender discussion among academics and practitioners on how to counter it, we present the ecosystem's strengths, weaknesses, opportunities, and threats. Incorporated in this paper are insights from a cluster of sessions at the 2006 INFORMS meeting in Pittsburgh ('Where Do We Want to Go in OR/MS?') and from the literature. Subject classifications: operations research; management science; SWOT analysis; ecosystem; change management. Area of review: OR Forum., 1. Introduction The purpose of this paper is to stimulate further discussion among OR/MS academics and practitioners on how to overcome the challenges that the OR/MS community is facing, especially [...]
- Published
- 2008
26. Solving linear cost dynamic lot-sizing problems in O(n log n) time
- Author
-
Ahuja, Ravindra K. and Hochbaum, Dorit S.
- Subjects
Economics -- Methods ,Economic lot size -- Evaluation -- Usage ,Algorithms -- Usage ,Business ,Mathematics ,Algorithm ,Evaluation ,Usage - Abstract
In this paper, we study capacitated dynamic lot-sizing problems with or without backorders, under the assumption that production costs are linear, that is, there are no setup costs. These two dynamic lot-sizing problems (with or without backorders) are minimum-cost flow problems on an underlying network that possess a special structure. We show how the well-known successive shortest-path algorithm for the minimum-cost flow problem can be used to solve these problems in O(n^2) time, where n is the number of periods in the dynamic lot-sizing problems, and how, with the use of dynamic trees, we can solve these problems in O(n log n) time. Our algorithm also extends to the dynamic lot-sizing problem with integer variables and convex production costs with running time O(n log n log U), where U is the largest demand value. Subject classifications: inventory/production: lot sizing; production/scheduling; networks/graphs: flow algorithms. Area of review: Optimization., 1. Introduction In this paper, we study the lot-sizing problem (Wagner and Whitin 1958), which has time-varying demands, costs, and capacities, but there are no setup costs. In today's manufacturing [...]
- Published
- 2008
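The successive shortest-path idea described in the record above reduces, in the no-backorder case, to a chronological greedy: each period's demand is served from the cheapest still-available earlier production period, where "cheapest" includes accumulated holding cost. The sketch below illustrates that special case on made-up data; it does not reproduce the paper's dynamic-tree speedup, the backorder variant, or the convex-cost extension.

```python
import heapq

def linear_cost_lot_sizing(d, c, h, cap):
    """Capacitated lot-sizing with linear production costs, no setups, no backorders.
    d[t] demand, c[t] unit production cost, h[t] unit holding cost t -> t+1, cap[t] capacity."""
    n = len(d)
    H = [0.0] * (n + 1)                       # H[t] = cumulative holding cost before period t
    for t in range(n):
        H[t + 1] = H[t] + h[t]
    heap, prod, total = [], [0.0] * n, 0.0
    for t in range(n):
        # Serving period t from period s <= t costs c[s] + H[t] - H[s] per unit;
        # H[t] is common to all sources at time t, so sources are keyed by c[s] - H[s].
        heapq.heappush(heap, (c[t] - H[t], t, float(cap[t])))
        need = float(d[t])
        while need > 1e-12:
            if not heap:
                raise ValueError("infeasible: cumulative capacity < cumulative demand")
            key, s, avail = heapq.heappop(heap)
            use = min(need, avail)
            prod[s] += use
            total += use * (c[s] + H[t] - H[s])
            need -= use
            if avail - use > 1e-12:
                heapq.heappush(heap, (key, s, avail - use))
    return total, prod

demand   = [10, 40, 20, 30]
unit_c   = [5.0, 8.0, 4.0, 9.0]
holding  = [1.0, 1.0, 1.0, 1.0]
capacity = [30, 30, 30, 30]
print(linear_cost_lot_sizing(demand, unit_c, holding, capacity))
```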
27. The access-control problem on capacitated FIFO networks with unique O-D paths is hard
- Author
-
Erera, Alan L., Daganzo, Carlos F., and Lovell, David J.
- Subjects
Management science -- Analysis ,Business ,Mathematics ,Analysis - Abstract
This paper is concerned with the performance of multicommodity capacitated networks in a deterministic but time-dependent environment. For a given time-dependent origin-destination table, this paper asks if it is easy [...]
- Published
- 2002
28. The natural drift: what happened to operations research?
- Author
-
Corbett, Charles J. and Wassenhove, Luk N. van
- Subjects
Operations research -- Models ,Management science -- Models ,Business ,Mathematics - Abstract
Operations research and management science (OR/MS) experts believe that a significant discrepancy has developed between OR/MS theories and actual practice. Experts point to the dearth of practice-oriented papers because most papers published in the Harvard Business Review and similar professional publications deal with managerial, and hence, the theoretical aspect of OR/MS. This development can be attributed to a natural drift where the traditional OR, which is largely concerned with practical applications, has remained underdeveloped compared to the theoretical side of the discipline. Practitioners should view this as a transitory stage that would allow the OR/MS discipline to become a better management tool. OR/MS work must ensure a balance of orientations while emphasizing flexibility.
- Published
- 1993
29. Military operations research: presentation at ORSA/TIMS meetings
- Author
-
Hartley, Dean S., III
- Subjects
Operations research -- Analysis ,Military planning -- Analysis ,Military art and science -- Research ,Business ,Mathematics - Abstract
The applications and methodologies of military operations research (OR) were analyzed based on the papers presented during the Operations Research Society of America and The Institute of Management Sciences meetings from May 1984 to May 1991. It was observed that the majority of papers in military OR came from private corporations, although the air force, navy and army services combined accounted for 35% of the papers. Most topics centered on military warfare, with heuristic approaches used in 25% of the papers received. Cross-tabulation analyses were performed for methods groups within application groups and vice versa, and for application groups within organizational groups. Trends in military OR were also analyzed; results show increased studies in manpower, policy and warfare, and decreased use of heuristic approaches.
- Published
- 1992
30. The lessons of flowshop scheduling research
- Author
-
Dudek, R.A., Panwalkar, S.S., and Smith, M.L.
- Subjects
Operations research -- Evaluation ,Scheduling (Management) -- Research ,Business ,Mathematics - Abstract
S.M. Johnson's paper on flowshop sequencing published in a 1954 issue of the 'Naval Research Logistics Quarterly' is reviewed. The focus of the review is the section of the paper dealing with the static deterministic case with the processing times of jobs being fixed. After analyzing the results of Johnson's study, it is concluded that the research does not make any significant contribution to the field of operations research since its findings do not have important applications in industry. To ensure the usefulness of studies, operations researchers are urged to assess the possible value of their study, its potential applications and its usefulness in solving existing problems.
- Published
- 1992
31. Ford Whitman Harris and the economic order quantity model
- Author
-
Erlenkotter, Donald
- Subjects
Operations research -- History ,Inventory control -- Models ,Economic lot size -- Models ,Business ,Mathematics - Abstract
Ford Whitman Harris presented the economic order quantity (EOQ) model in 1913. The basic structure of the EOQ model is now seen as obvious. The EOQ model is a simple square-root formula for determining optimal ordering quantity under the assumption of a continuous rate of demand and the need to balance tangible ordering costs against intangible inventory costs. Harris' 1913 EOQ model paper apparently was forgotten until its rediscovery in 1988. Before the discovery, there had been confusion over the model's origins, which is surprising since the paper was widely circulated at the time. Harris was an engineer and inventor who made many contributions to management research and patent law.
- Published
- 1990
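The square-root formula referred to in the record above is, in its standard textbook form (with assumed notation: demand rate D, fixed ordering cost K, and holding cost h per unit per unit time), obtained by minimizing the average cost per unit time:

```latex
TC(Q) = \frac{K D}{Q} + \frac{h Q}{2},
\qquad
\frac{d\,TC}{dQ} = -\frac{K D}{Q^{2}} + \frac{h}{2} = 0
\;\Longrightarrow\;
Q^{*} = \sqrt{\frac{2 K D}{h}}.
```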
32. Disruption Risk Mitigation in Supply Chains: The Risk Exposure Index Revisited
- Author
-
Gao, Sarah Yini, Simchi-Levi, David, Teo, Chung-Piaw, and Yan, Zhenzhen
- Subjects
Supply chains -- Reports ,Risk management -- Reports ,Risk management ,Business ,Mathematics - Abstract
A novel approach has been proposed in the literature using the time-to-recover (TTR) parameters to analyze the risk-exposure index (REI) of supply chains under disruption. This approach is able to capture the cascading effects of disruptions in the supply chains, albeit in simplified environments; TTRs are deterministic, and at most, one node in the supply chain can be disrupted. In this paper, we propose a new method to integrate probabilistic assessment of disruption risks into the REI approach and measure supply chain resiliency by analyzing the worst-case conditional value at risk of total lost sales under disruptions. We show that the optimal strategic inventory positioning strategy in this model can be fully characterized by a conic program. We identify appropriate cuts that can be added to the formulation to ensure zero duality gap in the conic program. In this way, the optimal primal and dual solutions to the conic program can be used to shed light on comparative statics in the supply chain risk mitigation problem. This information can help supply chain risk managers focus their mitigation efforts on critical suppliers and/or installations that will have a greater impact on the performance of the supply chain when disrupted. Supplemental Material: The e-companion is available at https://doi.org/10.1287/opre.2018.1776. Keywords: supply chain risk management * distributionally robust optimization * time to survive * sensitivity analysis * completely positive programming, Limited resources mean it is essential to focus risk management efforts where they are most needed and will deliver the biggest benefits. --Geraint John, Senior Vice President, Research, SCM World, [...]
- Published
- 2019
- Full Text
- View/download PDF
33. Exact First-Choice Product Line Optimization
- Author
-
Bertsimas, Dimitris and Misic, Velibor V.
- Subjects
Consumer preferences -- Analysis ,Backup software -- Product introduction ,Mathematical optimization -- Usage ,Product lines -- Reports ,Backup software ,Business ,Mathematics - Abstract
A fundamental problem faced by firms is that of product line design: given a set of candidate products that may be offered to a collection of customers, what subset of those products should be offered to maximize the profit that is realized when customers make purchases according to their preferences? In this paper, we consider the product line design problem when customers choose according to a first-choice rule and present a new mixed-integer optimization formulation of the problem. We theoretically analyze the strength of our formulation and show that it is stronger than alternative formulations that have been proposed in the literature, thus contributing to a unified understanding of the different formulations for this problem. We also present a novel solution approach for solving our formulation at scale, based on Benders decomposition, which exploits the surprising fact that Benders cuts for both the relaxation and the integer problem can be generated in a computationally efficient manner. We demonstrate the value of our formulation and Benders decomposition approach through two sets of experiments. In the first, we use synthetic instances to show that our formulation is computationally tractable and can be solved an order of magnitude faster for small- to medium-scale instances than the alternate, previously proposed formulations. In the second, we consider a previously studied product line design instance based on a real conjoint data set, involving over 3,000 candidate products and over 300 respondents. We show that this problem, which required a week of computation time to solve in prior work, is solved by our approach to full optimality in approximately 10 minutes. Funding: V. V. Misic was supported by a PGS-D award from the Natural Sciences and Engineering Research Council of Canada. Supplemental Material: The e-companion is available at https://doi.org/10.1287/opre.2018.1825. Keywords: product line design * first-choice models * mixed-integer optimization * Benders decomposition, 1. Introduction A key problem in marketing is to decide on what products to offer in the presence of customers who choose. The problem can be generally stated as follows: [...]
- Published
- 2019
- Full Text
- View/download PDF
34. Coordinated Patient Appointment Scheduling for a Multistation Healthcare Network
- Author
-
Wang, Dongyang, Muthuraman, Kumar, and Morrice, Douglas
- Subjects
United States. Veterans Health Administration -- Services ,Health care reform -- Research -- Reports ,Clinics -- Reports ,Business ,Mathematics - Abstract
Current healthcare reforms advocate significantly to improve the coordination of services around a patient-centric model. With most patient care delivered through outpatient services, the need to coordinate scheduling between different services in a hospital or colocated clinics becomes central to successful reform. Currently, outpatient services require independent appointment decisions, and the coordination is left to the patient. This approach causes several inefficiencies, including an increase in missed appointments and unacceptable access delays. This paper proposes a multistation network model that carefully strikes a balance between assumptions that allow tractability and assumptions that disallow real-world adoption. To allow for real-world applicability, we consider sequential appointment scheduling in a network of stations with exponential service times, no-show possibilities, and overbooking. We argue and present evidence that a heuristic myopic scheduling policy is close to optimal. However, because it is not simple enough for practical implementation, we explore a sequence of approximations and find one that offers a significant computational advantage. We also provide several managerial insights and discuss how network structures affect complexity. Keywords: healthcare * scheduling * multistation * coordination * patient-centered, 1. Introduction Annual spending on health in the United States is projected to grow 5.8% each year between 2014 and 2024. This growth rate is 1.1 times faster than the [...]
- Published
- 2019
- Full Text
- View/download PDF
35. Approximations to Stochastic Dynamic Programs via Information Relaxation Duality
- Author
-
Balseiro, Santiago R. and Brown, David B.
- Subjects
Stochastic approximation -- Methods ,Decision-making -- Methods ,Heuristic programming -- Usage ,Sequential analysis -- Methods ,Business ,Mathematics - Abstract
In the analysis of complex stochastic dynamic programs, we often seek strong theoretical guarantees on the suboptimality of heuristic policies. One technique for obtaining performance bounds is perfect information analysis: this approach provides bounds on the performance of an optimal policy by considering a decision maker who has access to the outcomes of all future uncertainties before making decisions, that is, fully relaxed nonanticipativity constraints. A limitation of this approach is that in many problems perfect information about uncertainties is quite valuable, and thus, the resulting bound is weak. In this paper, we use an information relaxation duality approach, which includes a penalty that punishes violations of the nonanticipativity constraints, to derive stronger analytical bounds on the suboptimality of heuristic policies in stochastic dynamic programs that are too difficult to solve. The general framework we develop ties the heuristic policy and the performance bound together explicitly through the use of an approximate value function: heuristic policies are greedy with respect to this approximation, and penalties are also generated in a specific way using this approximation. We apply this approach to three challenging problems: stochastic knapsack problems, stochastic scheduling on parallel machines, and sequential search problems. In each of these problems, we consider a greedy heuristic policy generated by an approximate value function and a corresponding penalized perfect information bound. We then characterize the gap between the performance of the policy and the information relaxation bound in each problem; the results imply asymptotic optimality of the heuristic policy for specific 'large' regimes of interest. Supplemental Material: The online appendices are available at https://doi.org/10.1287/opre.2018.1782. Keywords: dynamic programming * greedy heuristic policies * information relaxation duality * asymptotic optimality * stochastic knapsack problems * stochastic scheduling * sequential search problems, 1. Introduction Dynamic programming (DP) is a powerful and widely used framework for studying sequential decision making in the face of uncertainty. Unfortunately, stochastic dynamic programs are often far too [...]
- Published
- 2019
36. Optimal Design of Process Flexibility for General Production Systems
- Author
-
Chen, Xi, Ma, Tengyu, Zhang, Jiawei, and Zhou, Yuan
- Subjects
Flexible manufacturing systems -- Methods -- Analysis -- Usage -- Design and construction ,System design -- Models ,Systems analysis -- Models ,Flexible assembly systems -- Methods -- Analysis -- Usage -- Design and construction ,System design ,Business ,Mathematics - Abstract
Process flexibility is widely adopted as an effective strategy for responding to uncertain demand. Many algorithms for constructing sparse flexibility designs with good theoretical guarantees have been developed for balanced and symmetrical production systems. These systems assume that the number of plants equals the number of products, that supplies have the same capacity, and that demands are independently and identically distributed. In this paper we relax these assumptions and consider a general class of production systems. We construct a simple flexibility design to fulfill a (1 - ε) fraction of expected demand with high probability, where the average degree is O(ln(1/ε)). To motivate our construction, we first consider a natural weighted probabilistic construction from the existing literature where the degree of each node is proportional to its expected capacity. However, this strategy is shown to be suboptimal. To obtain an optimal construction, we develop a simple yet effective thresholding scheme. The analysis of our approach extends the classic analysis of expander graphs by overcoming several technical difficulties. Our approach may prove useful in other applications that require expansion properties of graphs with nonuniform degree sequences. Supplemental Material: The e-companion is available at https://doi.org/10.1287/opre.2018.1780. Keywords: flexible manufacturing * graph expanders * thresholding * weighted probabilistic construction, 1. Introduction Process flexibility (a.k.a., capacity pooling) is a successful operational strategy in manufacturing industries to hedge against demand uncertainty. The classic work of Jordan and Graves (1995) models a [...]
- Published
- 2019
- Full Text
- View/download PDF
37. Travel Time Estimation in the Age of Big Data
- Author
-
Bertsimas, Dimitris, Delarue, Arthur, Jaillet, Patrick, and Martin, Sebastien
- Subjects
Traffic estimation -- Methods ,Big data -- Usage ,Time travel -- Information management ,Company systems management ,Business ,Mathematics - Abstract
Twenty-first century urban planners have identified the understanding of complex city traffic patterns as a major priority, leading to a sharp increase in the amount and the diversity of traffic data being collected. For instance, taxi companies in an increasing number of major cities have started recording metadata for every individual car ride, such as its origin, destination, and travel time. In this paper, we show that we can leverage network optimization insights to extract accurate travel time estimations from such origin-destination data, using information from a large number of taxi trips to reconstruct the traffic patterns in an entire city. We develop a method that tractably exploits origin-destination data, which, because of its optimization framework, could also take advantage of other sources of traffic information. Using synthetic data, we establish the robustness of our algorithm to high variance data, and the interpretability of its results. We then use hundreds of thousands of taxi travel time observations in Manhattan to show that our algorithm can provide insights about urban traffic patterns on different scales and accurate travel time estimations throughout the network. Funding: This research was funded in part by the Office of Naval Research [Grants N00014-12-1-0999, N00014-15-1-2083, and N00014-16-1-2786]. Keywords: cost function estimation * second-order cone optimization * inverse shortest path length problem * large data set, 1. Introduction In today's increasingly dense urban landscapes, traffic congestion is an ever more prevalent issue. As flows of goods and people increase, billions of dollars in potential savings are [...]
- Published
- 2019
- Full Text
- View/download PDF
38. Competitive Facility Location with Selfish Users and Queues
- Author
-
Dan, Teodora and Marcotte, Patrice
- Subjects
Industrial locations -- Analysis ,Decision-making -- Analysis -- Models ,Algorithms -- Analysis ,Algorithm ,Business ,Mathematics - Abstract
In a competitive environment, we consider the problem faced by a service firm that makes decisions with respect to both the location and service levels of its facilities, taking into account that users patronize the facility that maximizes their individual utility, expressed as the sum of travel time, queueing delay, and a random term. This situation can be modelled as a bilevel program that involves discrete and continuous variables as well as linear and nonlinear (convex and nonconvex) functions. We design for its solution an algorithm based on piecewise linear approximation as well as a matheuristic that exploits the very structure of the problem. Funding: This work was supported by the Natural Sciences and Engineering Research Council of Canada [Grant 5789-2011 RGPIN]. Supplemental Material: The online appendix is available at https://doi.org/10.1287/opre.2018.1781. Keywords: location * bilevel programming * equilibrium * queueing * nonconvex, 1. Introduction 1.1. Contribution of This Paper Although the literature concerning discrete facility location is vast, few studies have focused on user choice, where the latter frequently involves congestion, either [...]
- Published
- 2019
- Full Text
- View/download PDF
39. Technical Note--Managing Inventory for Firms with Trade Credit and Deficit Penalty
- Author
-
Luo, Wei and Shang, Kevin H.
- Subjects
Inventory control -- Planning -- Finance ,Debt management -- Planning ,Trade credit -- Usage ,Working capital -- Planning ,Company business planning ,Company financing ,Business ,Mathematics - Abstract
This paper considers a firm that periodically orders inventory to satisfy demand in a finite horizon. The firm operates under two-level trade credit--that is, it offers trade credit to its customer while receiving one from its supplier. In addition to standard inventory-related costs, the firm also incurs periodic cash-related costs, which include a deficit penalty cost due to cash shortage and an interest gain (negative cost) due to excess cash after inventory payments. The objective is to obtain an inventory policy that maximizes the firm's working capital at the end of the horizon. We show that this problem is equivalent to one that minimizes the total inventory- and cash-related costs within the horizon. For this general model, we prove that a state-dependent policy is optimal. To facilitate implementation and reveal insights, we consider a simplified model in which a myopic policy is optimal under nondecreasing demand. A numerical study suggests that this myopic policy is an effective heuristic for the original system. The heuristic policy generalizes the classic base-stock policy and resembles practical working capital management under which a firm orders according to its working capital level. The policy parameters have closed-form expressions, which show the impact of demand and cost parameters on the inventory decision. Our study assesses the value of considering financial flows when a firm makes the inventory decision and reveals insights consistent with empirical findings. Supplemental Material: The online appendices are available at https://doi.org/10.1287/opre.2018.1787. Keywords: trade credit * inventory policy * working capital * default penalty, 1. Introduction Trade credit finance is an important inventory financing tool for firms. Particularly for small or newly established firms, which usually face expensive bank loans, trade credit is the [...]
- Published
- 2019
- Full Text
- View/download PDF
40. A Pilgrim Scheduling Approach to Increase Safety During the Hajj
- Author
-
Haase, Knut, Kasper, Mathias, Koch, Matthes, and Muller, Sven
- Subjects
Hajj -- Safety and security measures ,Spectator control -- Planning ,Pilgrims and pilgrimages -- Planning -- Safety and security measures ,Hajj -- Safety and security measures ,Scheduling (Management) -- Methods ,Company business planning ,Business ,Mathematics - Abstract
The Hajj--the great pilgrimage to Mecca, Saudi Arabia--is one of the five pillars of Islam. Up to four million pilgrims perform the Hajj rituals every year. This makes it one of the largest pedestrian problems in the world. Ramy al-Jamarat--the symbolic stoning of the devil--is known to be a particularly crowded ritual. Up until 2006, it was repeatedly overshadowed by severe crowd disasters. To avoid such disasters, Saudi authorities initiated a comprehensive crowd management program. A novel contribution to these efforts was the development of an optimized schedule for the pilgrims performing the stoning ritual. A pilgrim schedule prescribes specific routes and time slots for all registered pilgrim groups. Together, the assigned routes strictly enforce one-way flows toward and from the ritual site. In this paper, we introduce a model and a solution approach to the Pilgrim Scheduling Problem. Our multistage procedure first spatially smooths the utilization of infrastructure capacity to avoid dangerous pedestrian densities in the network. In the next optimization step, it minimizes overall dissatisfaction with the scheduled time slots. We solve the Pilgrim Scheduling Problem by a fix-and-optimize heuristic, and subsequently simulate the results to identify necessary modifications of the scheduling constraints. Our numerical study shows that the approach solves instances with more than 2.3 million variables in less than 10 minutes on average. At the same time, the gap between optimal solution and upper bound never exceeds 0.28%. The scheduling approach was an integral part of the Hajj planning process in 2007-2014 and 2016-2017. No crowd disaster occurred in these years. Our approach was not applied in 2015, when a severe crowd crush happened close to the ritual site. We briefly discuss possible causes and consequences of this accident. Open Access Statement: This work is licensed under a Creative Commons Attribution 4.0 International License. You are free to copy, distribute, transmit and adapt this work, but you must attribute this work as 'Operations Research. Copyright [c] 2019 The Author(s). https://doi.org/10.1287/opre.2018.1798, used under a Creative Commons Attribution License: https://creativecommons.Org/licenses/by/4.0/.' Funding: This work was partially supported by the Ministry of Municipal and Rural Affairs of the Kingdom of Saudi Arabia. Keywords: Hajj * crowd management * scheduling * timetabling * pedestrian simulation * crowd disaster * crowd dynamics * pilgrimage, 1. Introduction The number and severity of deadly crowd disasters at public events has risen significantly over the past decades (Helbing et al. 2015). A frequent cause of such disasters [...]
- Published
- 2019
- Full Text
- View/download PDF
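The Pilgrim Scheduling Problem above is solved with a fix-and-optimize heuristic on millions of variables; that machinery is well beyond a snippet. The toy sketch below only illustrates the structure of the underlying decision: assign pilgrim groups to time slots under capacity limits while keeping scheduled-slot dissatisfaction low, using a simple greedy rule. All groups, slots, capacities, and dissatisfaction costs are invented for illustration.

```python
# Toy illustration of the scheduling idea described above: assign pilgrim
# groups to time slots so that slot capacities are respected and total
# dissatisfaction with the assigned slot stays low. The paper solves a far
# larger model with a fix-and-optimize heuristic; this greedy sketch only
# shows the shape of the assignment decision. All data are made up.

def schedule_groups(groups, slots, capacity, dissatisfaction):
    """groups: {name: size}; slots: list of slot ids;
    capacity: {slot: max pilgrims}; dissatisfaction: {(name, slot): cost}."""
    remaining = dict(capacity)
    assignment, total_cost = {}, 0.0
    # Largest groups first, so they are less likely to be squeezed out.
    for name, size in sorted(groups.items(), key=lambda kv: -kv[1]):
        feasible = [s for s in slots if remaining[s] >= size]
        if not feasible:
            raise ValueError(f"no feasible slot for group {name}")
        best = min(feasible, key=lambda s: dissatisfaction[(name, s)])
        assignment[name] = best
        remaining[best] -= size
        total_cost += dissatisfaction[(name, best)]
    return assignment, total_cost

if __name__ == "__main__":
    groups = {"A": 300, "B": 250, "C": 400}
    slots = ["morning", "noon", "evening"]
    capacity = {"morning": 500, "noon": 400, "evening": 600}
    dissatisfaction = {(g, s): c for g, row in
                       zip(groups, [[0, 2, 5], [1, 0, 3], [4, 1, 0]])
                       for s, c in zip(slots, row)}
    print(schedule_groups(groups, slots, capacity, dissatisfaction))
```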
41. Designing Dynamic Contests
- Author
-
Bimpikis, Kostas, Ehsani, Shayan, and Mostagir, Mohamed
- Subjects
Technological innovations -- Planning ,Disclosure of information -- Planning ,Contests -- Planning ,Company business planning ,Business ,Mathematics - Abstract
Participants race toward completing an innovation project and learn about its feasibility from their own efforts and their competitors' gradual progress. Information about the status of competition can alleviate some of the uncertainty inherent in the contest, but it can also adversely affect effort provision from the laggards. This paper explores the problem of designing the award structure of a contest and its information disclosure policy in a dynamic framework and provides a number of guidelines for maximizing the designer's expected payoff. In particular, we show that the probability of obtaining the innovation as well as the time it takes to complete the project are largely affected by when and what information the designer chooses to disclose. Furthermore, we establish that intermediate awards may be used by the designer to appropriately disseminate information about the status of competition. Interestingly, our proposed design matches several features observed in real-world innovation contests. Supplemental Material: The e-companion is available at https://doi.org/10.1287/opre.2018.1823. Keywords: contests * learning * dynamic competition * open innovation * information, 1. Introduction Innovation contests are a tool that firms and institutions use to outsource innovation to the crowd. An open call is placed for an innovation project that participants compete [...]
- Published
- 2019
- Full Text
- View/download PDF
42. Analysis of Markov Chain Approximation for Option Pricing and Hedging: Grid Design and Convergence Behavior
- Author
-
Zhang, Gongqiu and Li, Lingfei
- Subjects
Options (Finance) -- Analysis -- Models ,Hedging (Finance) -- Analysis -- Models -- Methods ,Pricing -- Analysis -- Models ,Capital assets pricing model -- Methods ,Markov processes -- Analysis -- Models -- Methods ,Product price ,Business ,Mathematics - Abstract
Continuous time Markov chain (CTMC) approximation is an intuitive and powerful method for pricing options in general Markovian models. This paper analyzes how grid design affects the convergence behavior of barrier and European options in general diffusion models. Using the spectral method, we obtain sharp estimates for the convergence rate of option price for nonuniform grids. We propose to calculate an option's delta and gamma by taking the central difference of option prices on the grid. For this simple method, we prove that, surprisingly, delta and gamma converge at the same rate as option price does. Our analysis allows us to develop principles that are sufficient and necessary for designing nonuniform grids that can achieve second-order convergence for option price, delta, and gamma. Based on these principles, we propose a novel class of nonuniform grids that ensure that convergence is not only second order but also smooth. This further allows extrapolation to be applied to achieve an even higher convergence rate. Our grids enable the CTMC approximation method to price and hedge a large number of options with different strikes fast and accurately. Applicability of our results to jump models is discussed through numerical examples. Funding: L. Li was supported by the Hong Kong Research Grant Council General Research Fund [Grants 14202117 and 14205816]; G. Zhang was supported by the National Natural Science Foundation of China [Grant 11801423]. Supplemental Material: The e-companion is available at https://doi.org/10.1287/opre.2018.1791. Keywords: diffusions * jumps * Markov chain * nonuniform grids * convergence rate * smooth convergence * extrapolation * spectral representation * nonsmooth payoffs, 1. Introduction Asset prices are often assumed to follow a continuous time Markov process with continuous-state space in financial engineering. However, only a limited number of models exist (Kou 2002, [...]
- Published
- 2019
- Full Text
- View/download PDF
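For readers unfamiliar with CTMC approximation, the minimal sketch below prices a European call under Black-Scholes dynamics on a sinh-stretched nonuniform grid clustered near the strike, and recovers delta and gamma by central differences of the price vector, as the abstract describes. The grid is a generic choice, not the particular grid family proposed in the paper, and the parameter values are arbitrary.

```python
# Minimal sketch of CTMC approximation for a European call under
# Black-Scholes dynamics. The grid below is a generic sinh-stretched grid
# clustered near the strike, not the grid family proposed in the paper; it
# only illustrates the mechanics: build a generator by local moment
# matching, exponentiate it, then take central differences of the price
# vector to obtain delta and gamma.
import numpy as np
from scipy.linalg import expm

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

# Nonuniform grid on [Smin, Smax], points concentrated around K.
n, Smin, Smax, alpha = 201, 20.0, 400.0, 20.0
u = np.linspace(np.arcsinh((Smin - K) / alpha), np.arcsinh((Smax - K) / alpha), n)
S = K + alpha * np.sinh(u)
# Snap the nearest grid point to the spot so central differences apply at S0.
i0 = np.argmin(np.abs(S - S0)); S[i0] = S0

# Generator by matching the local drift r*S and variance (sigma*S)^2.
Q = np.zeros((n, n))
for i in range(1, n - 1):
    hm, hp = S[i] - S[i - 1], S[i + 1] - S[i]
    mu, var = r * S[i], (sigma * S[i]) ** 2
    qd = max((var - mu * hp) / (hm * (hm + hp)), 0.0)   # rate to S[i-1]
    qu = max((var + mu * hm) / (hp * (hm + hp)), 0.0)   # rate to S[i+1]
    Q[i, i - 1], Q[i, i + 1], Q[i, i] = qd, qu, -(qd + qu)

payoff = np.maximum(S - K, 0.0)
V = np.exp(-r * T) * expm(Q * T) @ payoff               # price on the whole grid

hm, hp = S[i0] - S[i0 - 1], S[i0 + 1] - S[i0]
delta = (V[i0 + 1] - V[i0 - 1]) / (hm + hp)             # central difference
gamma = 2 * ((V[i0 + 1] - V[i0]) / hp - (V[i0] - V[i0 - 1]) / hm) / (hm + hp)
print(f"price={V[i0]:.4f}  delta={delta:.4f}  gamma={gamma:.4f}")
```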
43. Load Balancing in the Nondegenerate Slowdown Regime
- Author
-
Gupta, Varun and Walton, Neil
- Subjects
History of computing ,Business ,Mathematics - Abstract
We analyze join-the-shortest-queue (JSQ) in a contemporary scaling regime known as the nondegenerate slowdown (NDS) regime. Join-the-shortest-queue is a classical load-balancing policy for queueing systems with multiple parallel servers. Parallel server queueing systems are regularly analyzed and dimensioned by diffusion approximations achieved in the Halfin--Whitt scaling regime. However, when jobs must be dispatched to a server upon arrival, we advocate the nondegenerate slowdown regime to compare different load-balancing rules. In this paper we identify a novel diffusion approximation and timescale separation that provides insights into the performance of JSQ. We calculate the price of irrevocably dispatching jobs to servers and prove this to be within 15% (in the NDS regime) of the rules that may maneuver jobs between servers. We also compare our results for the JSQ policy with the NDS approximations of many modern load-balancing policies such as idle-queue-first and power-of-d-choices policies that act as low information proxies for the JSQ policy. Our analysis leads us to construct new rules that have identical performance to JSQ but require less communication overhead than power of two choices. Funding: V. Gupta gratefully acknowledges financial support from the University of Chicago Booth School of Business. Supplemental Material: The online appendix is available at http://doi.org/10.1287/opre.2018.1768. Keywords: nondegenerate slowdown * load balancing * join-the-shortest-queue * M/M/k, 1. Introduction We investigate a parallel server queueing system using the join-the-shortest-queue (JSQ) dispatch rule. JSQ is a simple, classical, and widely deployed load-balancing policy in operations research. It remains [...]
- Published
- 2019
- Full Text
- View/download PDF
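As a companion to the abstract above, the short simulation below compares the mean response times of join-the-shortest-queue and power-of-d-choices with d = 2 in an M/M/k system where each job is irrevocably dispatched on arrival. It is only an illustrative baseline experiment; it does not reproduce the paper's nondegenerate slowdown analysis, and the load and server counts are arbitrary.

```python
# Quick simulation comparing two dispatch rules mentioned above: JSQ and
# power-of-d-choices with d = 2, in an M/M/k system with irrevocable
# dispatch on arrival. Illustrative baseline only.
import heapq, random

def simulate(k, lam, mu, n_jobs, rule, d=2, seed=1):
    rng = random.Random(seed)
    queue_len = [0] * k                      # jobs at each server (including in service)
    fifo = [[] for _ in range(k)]            # arrival times of jobs queued at each server
    events = [(rng.expovariate(lam), "arrival", -1)]
    arrivals, response_sum = 0, 0.0
    while arrivals < n_jobs or any(queue_len):
        t, kind, s = heapq.heappop(events)
        if kind == "arrival":
            if rule == "jsq":                # join the shortest of all k queues
                s = min(range(k), key=lambda i: queue_len[i])
            else:                            # power-of-d: shortest of d sampled queues
                s = min(rng.sample(range(k), d), key=lambda i: queue_len[i])
            fifo[s].append(t)
            queue_len[s] += 1
            if queue_len[s] == 1:            # server was idle: start service immediately
                heapq.heappush(events, (t + rng.expovariate(mu), "depart", s))
            arrivals += 1
            if arrivals < n_jobs:
                heapq.heappush(events, (t + rng.expovariate(lam), "arrival", -1))
        else:                                # departure from server s (FCFS)
            response_sum += t - fifo[s].pop(0)
            queue_len[s] -= 1
            if queue_len[s] > 0:
                heapq.heappush(events, (t + rng.expovariate(mu), "depart", s))
    return response_sum / n_jobs

if __name__ == "__main__":
    k, rho = 20, 0.9                         # 20 servers at 90% load
    lam, mu = rho * k, 1.0
    for rule in ("jsq", "power-of-2"):
        print(rule, round(simulate(k, lam, mu, 50_000, rule), 3))
```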
44. A Unifying Framework for Farrell Profit Efficiency Measurement
- Author
-
Fare, Rolf, He, Xinju, Li, Sungko, and Zelenyuk, Valentin
- Subjects
Business ,Mathematics - Abstract
Measuring profit efficiency is a challenging task, and many different approaches have been suggested. This paper synthesizes existing approaches and develops a general Farrell-type approach to profit efficiency measurement. Our derivations unveil new and useful relationships between existing measures and the proposed new Farrell-type measures. In addition, this helps us establish a generalized and unifying framework for studying the efficiency behavior of firms, where the profit efficiency measure satisfies desirable properties and contains the conventional Farrell measures of technical efficiency and allocative efficiency as multiplicative components. A new component of the decomposition of profit efficiency, which we call the revenue-efficient allocative efficiency measure, is introduced to identify the necessary changes of input scale and input mix in addition to output changes. The proposed new approach also encompasses and is coherent with profit-maximizing behavior as a benchmark case. Funding: The authors acknowledge the financial support provided by their institutions and partial support from the Australian Research Council [Grants ARC DP130101022 and ARC FT170100401] and the Research Committee's Matching Scheme for Inviting High Quality Sabbatical Visitors/Distinguished Scholars of Hong Kong Baptist University. Keywords: profit efficiency * Farrell-type measures * revenue-efficient allocative efficiency * invariance property of allocative efficiency measure, 1. Introduction Two types of scaling, inputs and outputs, are frequently used in measuring the technical efficiency of production: (i) scaling each element of a vector by a potentially distinct [...]
- Published
- 2019
- Full Text
- View/download PDF
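The new revenue-efficient allocative component introduced in the paper is not reproduced here. For orientation, the sketch below computes the conventional Farrell input-oriented technical efficiency score, one of the multiplicative components mentioned in the abstract, for each firm in a small made-up data set via a standard constant-returns-to-scale DEA linear program.

```python
# Sketch: the conventional Farrell input-oriented technical efficiency score
# (one of the multiplicative components mentioned in the abstract), computed
# for each firm against the observed firms with a small DEA linear program
# under constant returns to scale. The data are made up; the paper's new
# revenue-efficient allocative component is not reproduced here.
import numpy as np
from scipy.optimize import linprog

# inputs X: (n_firms, n_inputs), outputs Y: (n_firms, n_outputs)
X = np.array([[4.0, 2.0], [6.0, 3.0], [5.0, 5.0], [8.0, 4.0]])
Y = np.array([[10.0], [12.0], [11.0], [14.0]])

def farrell_technical_efficiency(j0, X, Y):
    """min theta  s.t.  X^T lam <= theta * x_j0,  Y^T lam >= y_j0,  lam >= 0."""
    n, m = X.shape
    _, q = Y.shape
    # decision vector z = [theta, lam_1, ..., lam_n]
    c = np.r_[1.0, np.zeros(n)]
    A_in = np.c_[-X[j0], X.T]                 # X^T lam - theta * x_j0 <= 0
    A_out = np.c_[np.zeros(q), -Y.T]          # -Y^T lam <= -y_j0
    A_ub = np.r_[A_in, A_out]
    b_ub = np.r_[np.zeros(m), -Y[j0]]
    bounds = [(0, None)] * (n + 1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

for j in range(len(X)):
    print(f"firm {j}: technical efficiency = {farrell_technical_efficiency(j, X, Y):.3f}")
```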
45. Sequential Capacity Expansion Options
- Author
-
Bensoussan, Alain and Chevalier-Roignant, Benoit
- Subjects
Uncertainty -- Analysis ,Investments -- Analysis ,Business ,Mathematics - Abstract
This paper considers a firm's capacity expansion decisions under uncertainty. The firm has leeway in timing investments and in choosing how much capacity to install at each investment time. We model this problem as the sequential exercising of compound capacity expansion options with embedded optimal capacity choices. We employ the impulse control methodology and obtain a quasi-variational inequality that involves two state variables: an exogenous, stochastic price process and a controlled capacity process (without a diffusion term). We provide a general verification theorem and identify--and prove the optimality of--a two-dimensional (s, S)-type policy for a specific (admittedly restrictive) choice of the model parameters and of the running profit. The firm delays investment in capacity to ensure that the perpetuity value of newly installed capacity exceeds the total opportunity cost, including the fixed cost component, by a sufficient margin. Our general model for 'the option to expand' transcends a single-option exercise and yields predictions of both the optimal investment timing and the optimal scale of production. Funding: Research was supported by the National Science Foundation, Division of Mathematical Sciences [Grants DMS-1303775 and DMS-1612880] and the Research Grants Council of the Hong Kong Special Administrative Region [CityU-500113 and CityU-11303316]. Key words: investment under uncertainty * hysteresis * capacity investment * real options, 1. Introduction Capacity investment decisions under uncertainty are among the most challenging problems faced by firms. Investing in many industries (e.g., oil and gas) involves committing substantial financial resources in [...]
- Published
- 2019
- Full Text
- View/download PDF
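The paper derives its two-dimensional (s, S)-type expansion policy from a quasi-variational inequality, which is not attempted here. The simulation below only illustrates the qualitative behavior the abstract describes, namely lumpy and delayed capacity additions, using an arbitrary threshold rule on a geometric Brownian motion price path; the trigger, target, and cost figures are hypothetical, not the paper's solution.

```python
# Illustrative simulation only: a generic threshold-type capacity expansion
# rule under a geometric Brownian motion price path. When the
# price-to-capacity ratio exceeds a trigger, the firm pays a fixed cost and
# expands capacity up to a target ratio. The trigger and target values are
# arbitrary; the paper derives the optimal policy from a quasi-variational
# inequality, which is not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
T, n = 10.0, 2_500                       # years, time steps
dt = T / n
mu, sigma = 0.03, 0.25                   # GBM drift and volatility of the price
P, capacity = 1.0, 1.0
trigger, target, fixed_cost, unit_cost = 1.5, 1.0, 0.2, 0.5
expansions = []

for step in range(n):
    P *= np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal())
    if P / capacity > trigger:           # invest only after a sufficient margin
        new_capacity = P / target        # jump capacity up to the target ratio
        cost = fixed_cost + unit_cost * (new_capacity - capacity)
        expansions.append((round(step * dt, 2), round(new_capacity - capacity, 3), round(cost, 3)))
        capacity = new_capacity

print(f"{len(expansions)} expansions (time, added capacity, cost):", expansions[:5])
```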
46. Online Vehicle Routing: The Edge of Optimization in Large-Scale Applications
- Author
-
Bertsimas, Dimitris, Jaillet, Patrick, and Martin, Sebastien
- Subjects
Traveling-salesman problem -- Analysis ,Business ,Mathematics - Abstract
With the emergence of ride-sharing companies that offer transportation on demand at a large scale and the increasing availability of corresponding demand data sets, new challenges arise to develop routing optimization algorithms that can solve massive problems in real time. In this paper, we develop an optimization framework, coupled with a novel and generalizable backbone algorithm, that allows us to dispatch in real time thousands of taxis serving more than 25,000 customers per hour. We provide evidence from historical simulations using the New York City routing network and yellow cab data to show that our algorithms improve upon the performance of existing heuristics in such real-world settings. Funding: Research funded in part by the Office of Naval Research [Grants N00014-12-1-0999, N00014-15-1-2083, and N00014-16-1-2786]. Supplemental Material: The online supplement is available at https://doi.org/10.1287/opre.2018.1763. Keywords: online vehicle routing * taxis * mixed-integer optimization * simulation * large scale, 1. Introduction Urban transportation is going through a rapid and significant evolution. In the recent past, the emergence of the Internet and of smartphone technologies has made us increasingly connected, [...]
- Published
- 2019
- Full Text
- View/download PDF
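The backbone algorithm of the paper is not reproduced here. As a point of reference, the sketch below shows a common batch-dispatch baseline: periodically match idle taxis to waiting requests so that total pickup distance is minimized, using the Hungarian algorithm. The coordinates are invented, and a real system would use network travel times rather than straight-line distances.

```python
# A simple batch-dispatch baseline (not the paper's backbone algorithm):
# every few seconds, match currently idle taxis to waiting requests so that
# total pickup distance is minimized, using the Hungarian algorithm.
# Coordinates are made up; real systems would use network travel times.
import numpy as np
from scipy.optimize import linear_sum_assignment

taxis = np.array([[0.0, 0.0], [2.0, 1.0], [5.0, 5.0]])                  # idle taxi positions
requests = np.array([[1.0, 0.5], [4.5, 5.5], [0.5, 2.0], [3.0, 3.0]])   # pickup locations

# Pairwise pickup distances (taxis x requests); unmatched requests simply
# wait for the next dispatch round.
dist = np.linalg.norm(taxis[:, None, :] - requests[None, :, :], axis=2)
rows, cols = linear_sum_assignment(dist)
for t, r in zip(rows, cols):
    print(f"taxi {t} -> request {r} (pickup distance {dist[t, r]:.2f})")
```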
47. The Humanitarian Pickup and Distribution Problem
- Author
-
Eisenhandler, Ohad and Tzur, Michal
- Subjects
Resource allocation -- Analysis ,Humanitarian aid -- Analysis ,Food banks -- Services ,Business ,Mathematics - Abstract
Food rescue--the collection of perishable products from food suppliers who are willing to make donations, and their distribution to welfare agencies that serve individuals in need--has become increasingly widespread in recent years. This phenomenon is a result of economic crises, but it is also encouraged by the tax benefits and positive image it provides to donor companies. The problem we study in this paper focuses on the logistic challenges of a food bank that on a daily basis uses vehicles of limited capacity to distribute food collected from suppliers in the food industry to welfare agencies, under an imposed maximal traveling time. We model this problem as a routing resource allocation problem, with the aim of maintaining equitable allocations to the different agencies while delivering as much food as possible overall. We introduce an innovative objective function that satisfies desired properties of the allocation, that is easy to compute and implement within a mathematical formulation, and that balances effectiveness and equity acceptably. We present both an exact solution method and a heuristic approach, based on the large neighborhood search framework, which relies on the fact that a certain subproblem is easy to solve. Numerical experiments on several real-life and randomly generated data sets confirm that high-quality solutions may be obtained. Funding: This research was supported by the Manna Center Program for Food Safety and Security at Tel Aviv University and by the Israel Science Foundation [Grant 463/15]. Supplemental Material: The online appendices are available at https://doi.org/10.1287/opre.2018.1751. Keywords: humanitarian logistics * food banks * resource allocation * vehicle routing, 1. Introduction Hunger is one of the most difficult humanitarian issues to cope with and is still prevalent even in developed countries. The latest assessments from the Global Nutrition Report [...]
- Published
- 2019
- Full Text
- View/download PDF
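The paper's objective function has specific axiomatic properties that are not reproduced here. The toy computation below only illustrates the effectiveness-equity tension the abstract describes: a generic stand-in score that discounts total delivered quantity by the worst fill rate across agencies can prefer a slightly smaller but far more even allocation. All demands and allocations are made up.

```python
# Toy illustration of the effectiveness/equity trade-off discussed above.
# The paper defines its own objective with specific properties; the score
# below (total delivered, partly discounted by the worst fill rate across
# agencies) is just a generic stand-in to show why a pure "maximize total
# food" objective can favor very uneven allocations.

def score(allocation, demand, weight=0.5):
    """Effectiveness = total delivered; equity = worst fill rate across agencies."""
    total = sum(allocation.values())
    fill_rates = [min(allocation[a] / demand[a], 1.0) for a in demand]
    return (1 - weight) * total + weight * total * min(fill_rates)

demand = {"agency1": 100, "agency2": 100, "agency3": 100}
lopsided = {"agency1": 180, "agency2": 20, "agency3": 0}    # more food in total
balanced = {"agency1": 65, "agency2": 60, "agency3": 60}    # less food, more even

for name, alloc in [("lopsided", lopsided), ("balanced", balanced)]:
    print(name, "total =", sum(alloc.values()), "score =", round(score(alloc, demand), 1))
```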
48. Ten years of the OR Practice section
- Author
-
Rothkopf, Michael H.
- Subjects
Operations research -- Bibliography ,Business ,Mathematics - Abstract
The OR Practice Section of the Operations Research journal celebrates its tenth anniversary in 1994. While the journal is dominated by theoretical papers, the section has served as a forum for papers based on actual applications of operations research. Although the editors of the section did not limit the mathematical or theoretical content of published papers, they nevertheless required that the writing be easy for practitioner readers to understand. Since its inception, 68 papers have been accepted for publication. Fifty-seven of these are accounts of actual applications, while 11 are surveys or studies of practice. Despite the applied slant of the section, the authors are predominantly academicians. The subject areas of the papers are most often applications in energy, manufacturing, and civil government. The client institutions of the 57 application papers are mostly unregulated private organizations.
- Published
- 1994
49. Fear of the Market or Fear of the Competitor? Ambiguity in a Real Options Game
- Author
-
Hellmann, Tobias and Thijssen, Jacco J.J.
- Subjects
Markets (Economics) -- Analysis ,Decision-making -- Analysis ,Demand (Economics) -- Analysis ,Business ,Mathematics - Abstract
In this paper, we study an investment game between two firms with a first-mover advantage, where payoffs are driven by a geometric Brownian motion. At least one of the firms is assumed to be ambiguous over the drift, with maxmin preferences over a strongly rectangular set of priors. We develop a strategy and equilibrium concept allowing for ambiguity and show that equilibria can be preemptive (a firm invests at a point where investment is Pareto dominated by waiting) or sequential (one firm invests as if it were the exogenously appointed leader). Following the standard literature, the worst-case prior for an ambiguous firm in the follower role is obtained by setting the lowest possible trend in the set of priors. However, if an ambiguous firm is the first mover, then the worst-case drift can fluctuate between the lowest and the highest trends. This novel result shows that 'worst-case prior' in a setting with drift ambiguity does not always equate to 'lowest trend.' As a consequence, preemptive pressure reduces. We show that this results in a possibly increasing level of ambiguity in firm value. If only one firm is ambiguous, then the value of the nonambiguous firm can be increasing in the level of ambiguity of the ambiguous firm. Keywords: real options * Knightian uncertainty * worst-case prior * optimal stopping * timing game, 1. Introduction Many, if not most, investment decisions taken by firms are characterized by substantial upfront sunk costs, (partial) irreversibility, and uncertainty over future cash flows (cf. Dixit and Pindyck [...]
- Published
- 2018
- Full Text
- View/download PDF
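The equilibrium analysis under ambiguity is beyond a snippet, but the classic single-firm benchmark helps fix intuition for the comparative statics discussed above: for a project value following a geometric Brownian motion, the optimal investment threshold is V* = beta/(beta - 1) times the sunk cost, where beta is the positive root of the fundamental quadratic. The sketch below evaluates this threshold for progressively lower drifts, the direction in which a maxmin firm's worst-case prior would push; the parameter values are arbitrary and this is not the paper's game model.

```python
# Single-firm benchmark sketch (standard real options result, not the game
# analyzed in the paper): for project value V following a GBM with drift mu
# and volatility sigma, discount rate r > mu, and sunk cost I, the optimal
# investment threshold is V* = beta/(beta - 1) * I, where beta is the
# positive root of 0.5*sigma^2*b*(b - 1) + mu*b - r = 0. A maxmin firm using
# its worst-case prior would set a lower drift; in this benchmark that
# raises beta and lowers the threshold. Parameter values are arbitrary.
import math

def investment_threshold(mu, sigma, r, cost):
    a = mu / sigma**2 - 0.5
    beta = -a + math.sqrt(a * a + 2 * r / sigma**2)   # positive root, > 1 when r > mu
    return beta / (beta - 1) * cost

r, sigma, cost = 0.05, 0.2, 1.0
for mu in (0.02, 0.01, 0.0, -0.01):                   # progressively lower drift
    print(f"mu = {mu:+.2f}:  V* = {investment_threshold(mu, sigma, r, cost):.3f}")
```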
50. Online First-Order Framework for Robust Convex Optimization
- Subjects
Mathematical optimization -- Evaluation ,Business ,Mathematics - Abstract
Robust optimization (RO) has emerged as one of the leading paradigms to efficiently model parameter uncertainty. The recent connections between RO and problems in statistics and machine learning domains create demand for solving RO problems at ever-larger scales. However, the traditional approaches for solving RO formulations based on building and solving robust counterparts or the iterative approaches utilizing nominal feasibility oracles can be prohibitively expensive and thus significantly hinder the scalability of the RO paradigm. In this paper, we present a general and flexible iterative framework to approximately solve robust convex optimization problems that is built on a fully online first-order paradigm. In comparison with the existing literature, a key distinguishing feature of our approach is that it requires access to only first-order oracles that are remarkably cheaper than pessimization or nominal feasibility oracles, while maintaining the same convergence rates. This, in particular, makes our approach much more scalable and hence preferable in large-scale applications, specifically those from machine learning and statistics domains. We also provide new interpretations of existing iterative approaches in our framework and illustrate our framework on robust quadratic programming. Keywords: robust optimization * online convex optimization * first-order methods * saddle point problems, 1. Introduction Robust optimization (RO) is one of the leading modeling paradigms for optimization problems under uncertainty. As opposed to the other approaches, RO seeks a solution that is immunized [...]
- Published
- 2018
- Full Text
- View/download PDF
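To make the first-order idea concrete, the sketch below solves a tiny robust problem by alternating a gradient step in the decision variable with a projected online gradient ascent step in the uncertain parameter, so that only gradients and a Euclidean projection are needed. This is a generic saddle-point illustration under invented data, not the paper's framework or its convergence guarantees.

```python
# Generic first-order saddle-point sketch in the spirit of the abstract
# (not the paper's exact framework or rates): solve
#     min_x  max_{||u||_2 <= rho}  (c + u)^T x + 0.5 * lam * ||x||^2
# by alternating a gradient descent step in x with a projected gradient
# ascent step in u. Only gradients and a Euclidean projection are needed;
# no robust counterpart or pessimization oracle is built. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
d, rho, lam, T = 5, 0.5, 1.0, 2_000
c = rng.normal(size=d)

def project_ball(u, radius):
    norm = np.linalg.norm(u)
    return u if norm <= radius else u * (radius / norm)

x, u = np.zeros(d), np.zeros(d)
x_avg = np.zeros(d)
for t in range(1, T + 1):
    eta = 1.0 / np.sqrt(t)                      # diminishing step sizes
    x = x - eta * (c + u + lam * x)             # descent step on x
    u = project_ball(u + eta * x, rho)          # projected ascent step on u
    x_avg += (x - x_avg) / t                    # running average of iterates

# Closed-form robust minimizer of c^T x + rho*||x|| + 0.5*lam*||x||^2, for comparison.
x_star = -(max(np.linalg.norm(c) - rho, 0.0) / np.linalg.norm(c)) * c / lam
print("distance of averaged iterate to closed-form solution:",
      np.linalg.norm(x_avg - x_star))
```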