2,657 results
Search Results
2. Order allocation for stock cutting in the paper industry
- Author
-
Menon, Syam and Schrage, Linus
- Subjects
Paper industry -- Research -- Analysis ,Management science -- Analysis -- Research ,Logistics -- Research -- Analysis ,Business ,Mathematics ,Analysis ,Research - Abstract
A common problem encountered in paper-production facilities is that of allocating customer orders to machines so as to minimize the total cost of production. It can be formulated as a [...]
- Published
- 2002
3. Analysis of distribution strategies in the industrial paper and plastics industry
- Author
-
Cohen, Morris A., Agrawal, Narendra, Agrawal, Vipul, and Raman, Ananth
- Subjects
Paper industry -- Distribution ,Plastics industry -- Distribution ,Distributors (Commerce) -- Research ,Distribution of goods -- Research ,Business ,Mathematics - Abstract
A study is conducted to examine the costs, benefits and strategic value of the redistribution function in the industrial paper and plastics industry. Specifically, the study seeks to find out what the value-added of the redistribution channel is, how distributors should source their products, how manufacturing companies can use their channel management policies to boost their profits, and how inventory and stock control policies impact the aforementioned issues. Results offer an economic justification for the role of redistributor in the industrial paper and plastics industry. The existence of middlemen in the distribution system is found to be advantageous for all players in the industry. They boost the inventory turns of distributors and allow manufacturers to cater to small distributors. Without redistributors, the costs of distributors would soar.
- Published
- 1995
4. Understanding linear programming modeling through an examination of the early papers on model formulation
- Author
-
Murphy, Frederic H. and Panchanadam, Venkat
- Subjects
Linear programming -- Analysis ,Cognitive psychology -- Analysis ,Business ,Mathematics - Abstract
Literature in cognitive psychology is used to analyze the thought processes behind the development of linear programming (LP) models. Emphasis is on understanding how the authors organize problems and models in their work. A kind of cognitive history of the early years of LP model building is thus drawn. Three hypotheses are investigated, namely, an organization of problems into categories by kinds of transformations, organization by model type and organization of problems and models by industry. Analysis of historical processes confirms that there is transfer and an underlying organization and that there are multiple dimensions forming relationships among problems and models. The three hypotheses forwarded are confirmed to varying degrees.
- Published
- 1997
5. Research strategies used by OR/MS workers as shown by an analysis of papers in flagship journals
- Author
-
Reisman, Arnold and Kirschnick, Frank
- Subjects
Operations research -- Methods ,Management science -- Research ,Business ,Mathematics - Abstract
Seven process categories of research strategies used by operations research (OR) and management science (MS) workers are identified. These strategies are ripple, embedding, bridging, structuring, creative application, statistical modeling and transfer of technology. A sample of articles appearing in the 1992 issues of the flagship journals Operations Research, Management Science and Interfaces is analyzed to determine which of the aforementioned strategies are commonly used by OR/MS researchers. The study also aims to link these research methodologies to the types of research work found at the extremes of Reisman and Kirschnick's (1994) six-point scale for categorizing OR/MS work. The results show that the ripple process is most often applied in theoretical research, while the transfer-of-technology process is most used in true applications.
- Published
- 1995
6. The devolution of OR/MS: implications from a statistical content analysis of papers in flagship journals
- Author
-
Reisman, Arnold and Kirschnick, Frank
- Subjects
Operations research -- Analysis ,Management science -- Analysis ,Business ,Mathematics - Abstract
Some operations research (OR) practitioners have complained about the field's 'devolution' or 'regression.' In the past, the science was market-oriented in that researchers sought solutions to real-world problems. However, current OR is input-oriented in that mathematicians search for problems that fit their models. A detailed survey of OR and management science publications is conducted to ascertain the existence of this trend. In particular, the amount of space allocated to theory relative to applications is analyzed for such flagship journals as Operations Research, Management Science and Interfaces, using a five-point classification scale. Comparison of the 1962 and 1992 editions of the first two journals, and the 1972 and 1992 editions of the last, reveals that usage of the terms 'application' and 'data' differs in degree and kind, respectively. The results suggest that OR practitioners need to reassess the profession and chart its future course.
- Published
- 1994
7. Correction to the paper 'optimal product launch times in a duopoly: balancing life-cycle revenues with product cost'
- Author
-
Guseo, Renato and Mortarino, Cinzia
- Subjects
Business ,Mathematics - Abstract
The aim of this note is to correct an error in the formulation of Theorem 1 by Savin and Terwiesch [Savin, S., C. Terwiesch. 2005. Optimal product launch times in [...]
- Published
- 2010
8. Comments on the paper: 'Heuristic and Special Case Algorithms for Dispersion Problems' by S.S. Ravi, D.J. Rosenkrantz, and G.K. Tayi
- Author
-
Tamir, Arie
- Subjects
Algorithms -- Analysis ,Heuristic programming -- Analysis ,Business ,Mathematics - Abstract
Some of the claims of S.S. Ravi, D.J. Rosenkrantz and G.K. Tayi regarding algorithms for facility dispersion models are disputed. They presented a heuristic for Max-Min Facility Dispersion (MMFD) with a performance guarantee of 1/2 and a heuristic for Max-Average Facility Dispersion (MAFD) with a guarantee of 1/4. However, the MMFD heuristic was analyzed earlier, in 1991, where the 1/2 guarantee was shown to hold for a more general model. An MMFD heuristic with a lower complexity bound appears in a study conducted by D.W. Wang and Y.S. Kuo in 1988. Moreover, the dynamic programming algorithm given for the one-dimensional MAFD yields an optimal solution considered trivial, and the extension of the one-dimensional version of MAFD to tree networks can also be solved through an approach from 1991.
- Published
- 1998
9. Early Integer Programming
- Author
-
Gomory, Ralph E.
- Published
- 2002
10. An Object-Oriented Random-Number Package with Many Long Streams and Substreams
- Author
-
L'Ecuyer, Pierre, Simard, Richard, Chen, E. Jack, and Kelton, W. David
- Published
- 2002
11. The Greedy Procedure for Resource Allocation Problems: Necessary and Sufficient Conditions for Optimality
- Author
-
Federgruen, Awi and Groenevelt, Henri
- Published
- 1986
12. George B. Dantzig: Operations Research Icon
- Author
-
Cottle, Richard W.
- Published
- 2005
- Full Text
- View/download PDF
13. A dynamic stochastic stock-cutting problem
- Author
-
Krichagina, Elena V., Rubio, Rodrigo, Taksar, Michael I., and Wein, Lawrence M.
- Subjects
Strathmore Paper Co. -- Production management -- 00097709 ,Inventory control -- Models ,Production control -- Models ,Scheduling (Management) -- Models ,Linear programming -- Usage ,Brownian motion -- Usage ,Stochastic programming -- Analysis ,Paper industry -- Production management ,Business ,Mathematics - Abstract
A stock cutting problem was examined in one of the facilities of Strathmore Paper Co., a paper plant that makes different-sized sheets for a finished goods inventory that meets random customer demand. In the factory, the controller decides when to shut down and restart the paper machine, and how to cut finished paper rolls into sheets of paper. A procedure, which involves linear programming and Brownian control, that generates an effective but suboptimal solution was devised. The methodology was found to be easy to use and simplified the production process, since the linear program significantly limits the number of cutting configurations that must be considered in the Brownian analysis. It performed better than other policies using a larger number of cutting configurations.
- Published
- 1998
14. Comments on the paper: `Heuristic and Special Case Algorithms for Dispersion Problems' by S. S...
- Author
-
Tamir, Arie
- Subjects
HEURISTIC ,MATHEMATICAL optimization ,ALGORITHMS ,MATHEMATICAL statistics ,MATHEMATICS ,STATISTICS ,PROBABILITY theory - Abstract
This article presents comments by the author on a paper discussing heuristic and special case algorithms for dispersion problems. Problems discussed in this article are the subject of the paper in focus. The results presented in that paper include a simple heuristic for Max-Min Facility Dispersion (MMFD) which provides a performance guarantee of 1/2, and a similar heuristic for Max-Avg Facility Dispersion (MAFD) with a performance guarantee of 1/4. It is also proved there that obtaining a performance guarantee of more than 1/2 for MMFD is NP-hard. The paper also discussed the one-dimensional versions of MMFD and MAFD, where the vertex set V consists of a set of n points on the real line. Section 2 of the paper is devoted to the heuristic for MMFD. It should be noted that this heuristic was analyzed before. In fact, it is shown therein that the performance guarantee of 1/2 holds for a more general model where the selected set P is restricted to be in a compact subset of the network induced by the edge distances.
- Published
- 1998
- Full Text
- View/download PDF
15. BPSS: a scheduling support system for the packaging industry
- Author
-
Adler, Leonard, Fraiman, Nelson, Kobacker, Edward, Pinedo, Michael, Plotnicoff, Juan Carlos, and Tso Pang Wu
- Subjects
Packaging industry -- Logistics ,Paper products industry -- Production management ,Production management -- Models ,Scheduling (Management) -- Models ,Business ,Mathematics - Abstract
The architecture of a scheduling support system which utilizes a five-step algorithm that accounts for all jobs at different stages is discussed. The scheduling framework incorporates priority rules designed for a flowshop where at least one phase constitutes a bottleneck. The algorithm has become a composite part of the Bagpak Production Scheduling System (BPSS), which has been in use in two packaging factories. The system includes database management and user interface modules. The machine setup in these plants approximates a flexible flowshop, with stages in series and parallel machines at each stage of the production process. The jobs are well-defined in terms of shipping dates and priorities as well as processing and setup times. Schedulers in the two factories using BPSS confirm the system's high degree of usability and accuracy.
- Published
- 1993
16. Rendezvous on a Planar Lattice
- Author
-
Steve Alpern and V. J. Baston
- Subjects
Discrete mathematics ,Integer lattice ,Rendezvous ,Graph paper ,Management Science and Operations Research ,Computer Science Applications ,T Technology (General) ,Planar ,Lattice (order) ,Mathematics - Abstract
We analyze the optimal behavior of two players who are lost on a planar surface and who want to meet each other in least expected time. They each know the initial distribution of the other's location, but have no common labeling of points, and so cannot simply go to a location agreed to in advance. They have no compasses, so do not even have a common notion of North. For simplicity, we restrict their positions to the integer lattice Z² (graph paper) and their motions to horizontal and vertical directions, as in the original work of Anderson and Fekete (2001).
- Published
- 2005
17. USING RANKING AND SELECTION TO "CLEAN UP" AFTER SIMULATION OPTIMIZATION.
- Author
-
Boesel, Justin, Nelson, Barry L., and Seong-Hee Kim
- Subjects
HEURISTIC ,MATHEMATICAL statistics ,MATHEMATICS ,PROBABILITY theory ,OPERATIONS research ,MATHEMATICAL optimization - Abstract
In this paper we address the problem of finding the simulated system with the best (maximum or minimum) expected performance when the number of systems is large and initial samples from each system have already been taken. This problem may be encountered when a heuristic search procedure--perhaps one originally designed for use in a deterministic environment--has been applied in a simulation-optimization context. Because of stochastic variation, the system with the best sample mean at the end of the search procedure may not coincide with the true best system encountered during the search. This paper develops statistical procedures that return the best system encountered by the search (or one near the best) with a prespecified probability. We approach this problem using combinations of statistical subset selection and indifference-zone ranking procedures. The subset-selection procedures, which use only the data already collected, screen out the obviously inferior systems, while the indifference-zone procedures, which require additional simulation effort, distinguish the best from the less obviously inferior systems. [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
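The two-phase idea in the abstract above (screen with data already collected, then rank the survivors with more simulation) can be illustrated with a minimal sketch. This is not the paper's exact procedure: the Bonferroni-style critical value, the normal approximation, and the equal sample size `n` per system are all simplifying assumptions for illustration.

```python
from math import sqrt
from statistics import NormalDist

def screen_systems(means, variances, n, alpha=0.05):
    """Hedged sketch of a subset-selection screen: keep system i unless
    some other system beats it by more than a slack term based on the
    standard error of the difference. Not the authors' exact procedure."""
    k = len(means)
    # Bonferroni-style critical value over the k-1 comparisons per system
    h = NormalDist().inv_cdf(1 - alpha / (k - 1))
    keep = []
    for i in range(k):
        survives = all(
            means[i] >= means[j] - h * sqrt(variances[i] / n + variances[j] / n)
            for j in range(k) if j != i
        )
        if survives:
            keep.append(i)
    return keep
```

Obviously inferior systems are eliminated using only the existing samples; in the paper's framework the survivors would then enter an indifference-zone ranking phase that takes additional simulation effort.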
18. An Annotated Bibliography of Decision Analytic Applications to Health Care.
- Author
-
Krischer, Jeffrey P.
- Subjects
DECISION making ,MEDICAL care ,HEALTH policy ,PERIODICALS ,PROBABILITY theory ,MATHEMATICS - Abstract
This paper describes 110 applications of decision analysis to health care. Each paper is characterized according to the particular problem it addresses and the methods employed in the application. These applications span 15 years of study and are reported in a widely dispersed literature. Nearly half of the published articles appear in journals with a medical audience and more than 25% of the studies remain unpublished. The major areas of application identified in this review have been the evaluation of alternatives in treatment and health policy planning. Studies discussing conceptual issues in the application of decision analysis represent a substantial portion of those identified. Almost equal numbers of applications involve the use of single and multiattribute utilities in scaling decision outcomes and relatively few apply to group utilities. General discussions of decision analysis methods and applications focused on probability assessments/analyses represent the other major categories of studies cited. [ABSTRACT FROM AUTHOR]
- Published
- 1980
- Full Text
- View/download PDF
19. Rank Centrality: Ranking from Pairwise Comparisons.
- Author
-
Negahban, Sahand, Oh, Sewoong, and Shah, Devavrat
- Subjects
MULTINOMIAL distribution ,AGGREGATION (Statistics) ,ALGORITHMS ,DISTRIBUTION (Probability theory) ,MATHEMATICS - Abstract
The question of aggregating pairwise comparisons to obtain a global ranking over a collection of objects has been of interest for a very long time: be it ranking of online gamers (e.g., MSR's TrueSkill system) and chess players, aggregating social opinions, or deciding which product to sell based on transactions. In most settings, in addition to obtaining a ranking, finding 'scores' for each object (e.g., player's rating) is of interest for understanding the intensity of the preferences. In this paper, we propose Rank Centrality, an iterative rank aggregation algorithm for discovering scores for objects (or items) from pairwise comparisons. The algorithm has a natural random walk interpretation over the graph of objects with an edge present between a pair of objects if they are compared; the score, which we call Rank Centrality, of an object turns out to be its stationary probability under this random walk. To study the efficacy of the algorithm, we consider the popular Bradley-Terry-Luce (BTL) model (equivalent to the Multinomial Logit (MNL) for pairwise comparisons) in which each object has an associated score that determines the probabilistic outcomes of pairwise comparisons between objects. In terms of the pairwise marginal probabilities, which is the main subject of this paper, the MNL model and the BTL model are identical. We bound the finite sample error rates between the scores assumed by the BTL model and those estimated by our algorithm. In particular, the number of samples required to learn the score well with high probability depends on the structure of the comparison graph. When the Laplacian of the comparison graph has a strictly positive spectral gap, e.g., each item is compared to a subset of randomly chosen items, this leads to dependence on the number of samples that is nearly order optimal. 
Experimental evaluations on synthetic data sets generated according to the BTL model show that our algorithm performs as well as the maximum likelihood estimator for that model and outperforms other popular ranking algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
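The random-walk construction in the abstract above can be sketched in a few lines of pure Python. The `wins` matrix format and the uniform normalization by n - 1 (which assumes every pair may be compared) are illustrative assumptions, not the paper's general comparison-graph setup.

```python
def rank_centrality(n, wins, iters=5000):
    """Sketch of the Rank Centrality idea: a random walk on the comparison
    graph that moves from i to j in proportion to the fraction of times
    j beat i; the score of an item is its stationary probability.
    wins[i][j] = number of times item i beat item j (assumed data format)."""
    d = n - 1  # normalization, assuming all pairs may be compared
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j and wins[i][j] + wins[j][i] > 0:
                P[i][j] = wins[j][i] / (wins[i][j] + wins[j][i]) / d
        P[i][i] = 1.0 - sum(P[i])  # self-loop keeps rows stochastic
    # power iteration for the stationary distribution
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi
```

With three items where 0 beats 1, 0 beats 2, and 1 beats 2 by a 3-to-1 margin each, the stationary scores come out in the order 0 > 1 > 2, matching the intuitive ranking.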
20. AN INVERSE PROBLEM OF THE LANCHESTER SQUARE LAW IN ESTIMATING TIME-DEPENDENT ATTRITION COEFFICIENTS.
- Author
-
Chen, Hsi-Mei
- Subjects
MILITARY science ,LINEAR time invariant systems ,LINEAR systems ,DISCRETE-time systems ,OPERATIONS research ,MATHEMATICS - Abstract
This paper considers the inverse problem of estimating time-varying attrition coefficients in Lanchester's square law with reinforcements, using observed data on some or all of the battle's strength histories and the reinforcement schedules. The method employed is a nonparametric extension of the parametric conjugate gradient method (P-CGM). We use hypothetical strength histories and reinforcement schedules that are known to be without error at several points in time to illustrate the method. However, the method has application in other circumstances. The problem of estimating the time-dependent attrition coefficients that best fit a set of given strength histories is inherently a nonparametric inverse problem. In this paper we cast it into a nonlinear optimization problem, and show how to solve it numerically by using a nonparametric conjugate gradient method (NP-CGM). Two numerical test cases are provided to illustrate the application of the method. [ABSTRACT FROM AUTHOR]
- Published
- 2002
- Full Text
- View/download PDF
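The forward model behind the inverse problem above is simple to state: Lanchester's square law sets dx/dt = -a·y and dy/dt = -b·x. A minimal Euler-step simulation with constant coefficients and no reinforcements follows; the paper estimates time-varying a(t), b(t) from such strength histories, so the constant coefficients and step size here are illustrative assumptions only.

```python
def lanchester_square(x0, y0, a, b, dt=0.001, steps=1000):
    """Euler simulation of Lanchester's square law dx/dt = -a*y,
    dy/dt = -b*x, with strengths clamped at zero once a side is
    annihilated. Returns the strength history as (x, y) pairs."""
    x, y = x0, y0
    history = [(x, y)]
    for _ in range(steps):
        # simultaneous update: right-hand sides use the old (x, y)
        x, y = max(x - a * y * dt, 0.0), max(y - b * x * dt, 0.0)
        history.append((x, y))
    return history
```

A useful check: the square law conserves b·x² - a·y², so starting from (x0, y0) = (2, 1) with a = b = 1 the weaker side is annihilated and the winner's final strength is approximately √(2² - 1²) = √3 ≈ 1.73.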
21. A METHOD TO CALCULATE STEADY-STATE DISTRIBUTIONS OF LARGE MARKOV CHAINS.
- Author
-
Feinberg, Brion N. and Chiu, Samuel S.
- Subjects
ITERATIVE methods (Mathematics) ,NUMERICAL analysis ,ALGORITHMS ,MARKOV processes ,MATHEMATICS ,STOCHASTIC processes - Abstract
This paper develops an efficient iterative algorithm to calculate the steady-state distribution of nearly all irreducible discrete-time Markov chains. Computational experiences suggest that, for large Markovian systems (more than 130 states), the proposed algorithm can be ten times faster than standard Gaussian elimination in finding solutions to an accuracy of 0.1%. The proposed algorithm is developed in three stages. First, we develop a very efficient algorithm for determining steady-state distributions of a restricted class of Markovian systems. A second result establishes a relationship between a general irreducible Markovian system and a system in the restricted class of Markovian systems. Finally, we combine the two results to produce an efficient, iterative algorithm to solve Markov systems. The paper concludes with a discussion of the observed performance of the algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 1987
- Full Text
- View/download PDF
22. Consistency of Multidimensional Convex Regression.
- Author
-
Lim, Eunji and Glynn, Peter W.
- Subjects
REGRESSION analysis ,MULTIVARIATE analysis ,CONVEX functions ,MATHEMATICAL variables ,MATHEMATICS ,ASYMPTOTIC expansions - Abstract
Convex regression is concerned with computing the best fit of a convex function to a data set of n observations in which the independent variable is (possibly) multidimensional. Such regression problems arise in operations research, economics, and other disciplines in which imposing a convexity constraint on the regression function is natural. This paper studies a least-squares estimator that is computable as the solution of a quadratic program and establishes that it converges almost surely to the "true" function as n → ∞ under modest technical assumptions. In addition to this multidimensional consistency result, we identify the behavior of the estimator when the model is misspecified (so that the "true" function is nonconvex), and we extend the consistency result to settings in which the function must be both convex and nondecreasing (as is needed for consumer preference utility functions). [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
23. Finite Disjunctive Programming Characterizations for General Mixed-Integer Linear Programs.
- Author
-
Binyuan Chen, Küçükyavuz, Simge, and Sen, Suvrajeet
- Subjects
INTEGER programming ,MATHEMATICAL optimization ,ALGORITHMS ,MATHEMATICAL programming ,MATHEMATICAL variables ,MATHEMATICS - Abstract
In this paper, we give a finite disjunctive programming procedure to obtain the convex hull of general mixed-integer linear programs (MILP) with bounded integer variables. We propose a finitely convergent convex hull tree algorithm that constructs a linear program that has the same optimal solution as the associated MILP. In addition, we combine the standard notion of sequential cutting planes with ideas underlying the convex hull tree algorithm to help guide the choice of disjunctions to use within a cutting plane method. This algorithm, which we refer to as the cutting plane tree algorithm, is shown to converge to an integral optimal solution in finitely many iterations. Finally, we illustrate the proposed algorithm on three well-known examples in the literature that require an infinite number of elementary or split disjunctions in a rudimentary cutting plane algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
24. CONSTANT EXCHANGE RISK PROPERTIES.
- Author
-
Farquhar, Peter H. and Nakamura, Yutaka
- Subjects
DECISION making ,METHODOLOGY ,UTILITY functions ,PROBLEM solving ,MATHEMATICS ,UNCERTAINTY - Abstract
This paper develops a methodology using risk properties to characterize the functional form of a utility measure for decision making under uncertainty. The constant absolute risk property, for example, is known to be necessary and sufficient, with appropriate regularity conditions, for the utility function to have either a linear or an exponential form. A new generalization of this property, called the constant exchange risk property, gives a characterization of six utility functions: the linear function, the exponential function, the quadratic function, the sum of two exponential functions, the sum of a linear and an exponential function, and the product of a linear and an exponential function. Since all of these functional forms have been used previously as approximations, this methodology allows analysts to distinguish beforehand between alternative forms and thus properly specify the utility function in applications. [ABSTRACT FROM AUTHOR]
- Published
- 1987
- Full Text
- View/download PDF
25. The Centers and Medians of a Graph.
- Author
-
Minieka, Edward
- Subjects
GRAPHIC methods ,MATHEMATICS ,MEDIAN (Mathematics) ,STANDARD deviations ,ARITHMETIC mean ,GRAPH theory ,STATISTICS - Abstract
This paper extends previous results for calculating the centers and medians of a graph so that every point on every edge as well as all vertices are served. [ABSTRACT FROM AUTHOR]
- Published
- 1977
- Full Text
- View/download PDF
26. Measures of Effectiveness for Crime Reduction Programs.
- Author
-
Maltz, Michael D.
- Subjects
CRIMINAL justice system ,CRIME prevention ,POLICE ,MATHEMATICS ,ARREST ,AGENCY (Law) ,REACTION time ,DISCRIMINATION (Sociology) - Abstract
This paper describes some measures commonly used to evaluate anticrime programs and proposes directions for research on improved measures. Since the police are usually seen as the main crime control agency, the paper first discusses the differences between evaluating the police and evaluating crime control programs. Five measures used to evaluate such programs are then analyzed: crime rate, clearance rate, arrest rate, police response time, and crime seriousness index. The last measure suggests the direction for the development of an improved measure of crime: the development of a more complete taxonomy of crime and the discrimination between and explication of the different types of harm caused by crime. [ABSTRACT FROM AUTHOR]
- Published
- 1975
- Full Text
- View/download PDF
27. A Comparative Study of Flow-Shop Algorithms.
- Author
-
Baker, Kenneth R.
- Subjects
ALGORITHMS ,OPERATIONS research ,INDUSTRIAL engineering ,ALGEBRA ,ARITHMETIC ,MATHEMATICS - Abstract
This paper describes an experimental comparison of flow-shop algorithms, motivated by the need to consolidate recent research on this topic. Using a set of test problems, it investigated various branch-and-bound and elimination strategies in a comparative study and then combined them to produce a new and efficient solution algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 1975
- Full Text
- View/download PDF
28. Planning for End-User Substitution in Agribusiness.
- Author
-
Bansal, Saurabh and Dyer, James S.
- Subjects
COMMERCIAL markets ,AGRICULTURAL industries ,POLYHEDRA ,DECISION making ,SEED technology - Abstract
Saurabh Bansal and James S. Dyer study a common problem in the commercial agribusiness market, where farmers have a preference for a farm input such as a seed based on a fit with their geographical location but are also willing to accept a closely related substitute. Such consumer-driven choices may not be adequately represented by traditional models that maximize the profit of a firm that seeks to make substitutions while maximizing its profit. They use a set of recent results for evaluations of moments over polyhedra to determine the exact inventory levels a firm should keep of substitutable products. Using proprietary data from a large firm in this domain, they highlight the role of geographical and climate-related factors that affect product substitution in the agribusiness industry and identify specific regions in the United States where product substitution is a source of substantial revenue for firms. In this paper, we consider the problem in which a firm offers a portfolio of products (agricultural seeds) to multiple customer segments comprising farmers under aggressive fill-rate constraints, and some, but not all, customers will accept a substitute to their preferred choice. This business situation is not adequately represented by traditional inventory-management models, where a firm initiates a substitution based on its monetary considerations. By exploiting some recent results on polyhedral expectations, we develop a decomposition-based approach to determine optimal inventory levels for the firm's seed portfolio under aggressive fill-rate targets. The approach provides an exact solution that is implementable in managerial-friendly environments and permits a what-if analysis for real-time decision support. 
Subsequently we extend the technical development to establish: (i) a simple computable bound on the value of substitution, (ii) a procedure for determining implied penalty costs for substitutable products, and (iii) comparative static results for the product portfolio. We also discuss the implementation of the technical development at a Fortune 100 firm that has resulted in significant monetary savings. Finally, we provide geography- and climate-specific managerial insights for managing seed substitution by end-users. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
29. Analysis of Markov Influence Graphs
- Author
-
Berkhout, Joost and Heidergott, Bernd F.
- Subjects
Markov processes -- Methods -- Reports ,Social networks -- Reports ,Business ,Mathematics - Abstract
The research presented in this paper is motivated by the growing interest in the analysis of networks found in the World Wide Web and of social networks. In this paper, we elaborate on the Kemeny constant as a measure of connectivity of the weighted graph associated with a Markov chain. For finite Markov chains, the Kemeny constant can be computed by means of simple algebra via the deviation matrix and the ergodic projector of the chain. Using this fact, we introduce a new decomposition algorithm for Markov chains that splits the graph the Markov chain is defined on into subgraphs, such that the connectivity of the chain measured by the Kemeny constant is maximally decreased. We discuss applications of our decomposition algorithm to influence ranking in social networks, decomposition of a social network into subnetworks, identification of nearly decomposable chains, and cluster analysis. Keywords: Markov influence graphs * social networks * deviation matrix * Markov multichains * Kemeny decomposition algorithm * nearly decomposable Markov chains, 1. Introduction Consider a directed graph with finite node set S = {1, ..., n} and set of edges E ⊆ S × S. Let a Markov chain P be [...]
- Published
- 2019
- Full Text
- View/download PDF
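The abstract above notes that for finite chains the Kemeny constant reduces to simple algebra once the chain's stationary structure is known. A pure-Python sketch under one common convention, K = trace(Z) - 1 with Z = (I - P + Π)⁻¹ the fundamental matrix; the power-iteration and Gauss-Jordan helpers are illustrative only and have nothing to do with the paper's decomposition algorithm.

```python
def stationary(P, iters=10000):
    """Stationary distribution of a row-stochastic matrix by power iteration
    (assumes the chain is aperiodic so the iteration converges)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def inverse(A):
    """Gauss-Jordan inverse of a small matrix (partial pivoting, no
    singularity safeguards -- a sketch, not production code)."""
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        M[c] = [v / M[c][c] for v in M[c]]
        for r in range(n):
            if r != c:
                M[r] = [vr - M[r][c] * vc for vr, vc in zip(M[r], M[c])]
    return [row[n:] for row in M]

def kemeny_constant(P):
    """K = trace(Z) - 1, where Z = (I - P + Pi)^(-1) and Pi has the
    stationary distribution in every row. Conventions in the literature
    differ by 1 depending on whether the target state itself counts."""
    n = len(P)
    pi = stationary(P)
    A = [[(1.0 if i == j else 0.0) - P[i][j] + pi[j] for j in range(n)]
         for i in range(n)]
    Z = inverse(A)
    return sum(Z[i][i] for i in range(n)) - 1.0
```

As a sanity check, this agrees with the eigenvalue formula K = Σ 1/(1 - λᵢ) over the non-unit eigenvalues: a two-state chain with transition matrix [[0.9, 0.1], [0.2, 0.8]] has λ₂ = 0.7, so K = 10/3.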
30. Robust Dual Dynamic Programming
- Author
-
Georghiou, Angelos, Tsoukalas, Angelos, and Wiesemann, Wolfram
- Subjects
Dynamic programming -- Methods ,Stochastic programming -- Methods ,Algorithms -- Usage ,DNA polymerases -- Usage ,Algorithm ,Business ,Mathematics - Abstract
Multistage robust optimization problems, where the decision maker can dynamically react to consecutively observed realizations of the uncertain problem parameters, pose formidable theoretical and computational challenges. As a result, the existing solution approaches for this problem class typically determine suboptimal solutions under restrictive assumptions. In this paper, we propose a robust dual dynamic programming (RDDP) scheme for multistage robust optimization problems. The RDDP scheme takes advantage of the decomposable nature of these problems by bounding the costs arising in the future stages through lower and upper cost-to-go functions. For problems with uncertain technology matrices and/or constraint right-hand sides, our RDDP scheme determines an optimal solution in finite time. Also, if the objective function and/or the recourse matrices are uncertain, our method converges asymptotically (but deterministically) to an optimal solution. Our RDDP scheme does not require a relatively complete recourse, and it offers deterministic upper and lower bounds throughout the execution of the algorithm. We show the promising performance of our algorithm in a stylized inventory management problem. Keywords: robust optimization * multistage problems * dual dynamic programming * error bounds, 1. Introduction In this paper, we study multistage robust optimization problems of the form [mathematical expression not reproducible] (1) where the parameters ξ_t are revealed at the beginning of stage [...]
- Published
- 2019
- Full Text
- View/download PDF
31. OR Forum--Public Health Preparedness: Answering (Largely Unanswerable) Questions with Operations Research--The 2016-2017 Philip McCord Morse Lecture
- Author
-
Brandeau, Margaret L.
- Subjects
United States. Department of Homeland Security -- Research ,Bioterrorism -- Research ,National security -- Research ,Public health -- Analysis -- Research ,Business ,Mathematics ,Stanford University -- Research - Abstract
Public health security--achieved by effectively preventing, detecting, and responding to events that affect public health such as bioterrorism, disasters, and naturally occurring disease outbreaks--is a key aspect of national security. However, effective public health preparedness depends on answering largely unanswerable questions. For example: What is the chance of a bioterror attack in the United States in the next five years? What is the chance of an anthrax attack? What might be the location and magnitude of such an attack? This paper describes how OR-based analyses can provide insight into complex public health preparedness planning problems--and thus support good decisions. Three examples from the author's research are presented: logistics of response to an anthrax attack, prepositioning of medical countermeasures for anthrax, and stockpiling decisions for the United States' Strategic National Stockpile. Keywords: professional addresses * government planning * ORMS philosophy, 1. Introduction This paper discusses public health preparedness and, in particular, how we can use operations research to help obtain answers to questions that are in many ways unanswerable. Public [...]
- Published
- 2019
- Full Text
- View/download PDF
32. Input-Output Uncertainty Comparisons for Discrete Optimization via Simulation
- Author
-
Song, Eunhye and Nelson, Barry L.
- Subjects
Uncertainty -- Analysis ,Decision-making -- Analysis -- Models ,Mathematical optimization -- Analysis ,Business ,Mathematics - Abstract
When input distributions to a simulation model are estimated from real-world data, they naturally have estimation error causing input uncertainty in the simulation output. If an optimization via simulation (OvS) method is applied that treats the input distributions as 'correct,' then there is a risk of making a suboptimal decision for the real world, which we call input model risk. This paper addresses a discrete OvS (DOvS) problem of selecting the real-world optimal from among a finite number of systems when all of them share the same input distributions estimated from common input data. Because input uncertainty cannot be reduced without collecting additional real-world data--which may be expensive or impossible--a DOvS procedure should reflect the limited resolution provided by the simulation model in distinguishing the real-world optimal solution from the others. In light of this, our input-output uncertainty comparisons (IOU-C) procedure focuses on comparisons rather than selection: it provides simultaneous confidence intervals for the difference between each system's real-world mean and the best mean of the rest with any desired probability, while accounting for both stochastic and input uncertainty. To make the resolution as high as possible (intervals as short as possible) we exploit the common input data effect to reduce uncertainty in the estimated differences. Under mild conditions we prove that the IOU-C procedure provides the desired statistical guarantee asymptotically as the real-world sample size and simulation effort increase, but it is designed to be effective in finite samples. Funding: This study was supported by the National Science Foundation [Grant CMMI-1068473]. Supplemental Material: The electronic companion of this paper is available at https://doi.org/10.1287/opre.2018.1796. Keywords: optimization via simulation under input uncertainty * common-input-data effect * multiple comparisons with the best, 1. 
Introduction Because of the flexibility of simulation, optimization via simulation (OvS) is a widely accepted tool to improve system performance. Real-world problems typically involve stochastic processes (e.g., demand for [...]
- Published
- 2019
- Full Text
- View/download PDF
33. Dynamic Volunteer Staffing in Multicrop Gleaning Operations
- Author
-
Ata, Baris, Lee, Deishin, and Sonmez, Erkut
- Subjects
United States. Department of Agriculture -- Powers and duties ,Gleaning -- Analysis -- Methods -- Usage ,Food wastes -- Analysis -- Control -- Waste management ,Agricultural laborers -- Practice ,Business ,Mathematics - Abstract
Gleaning programs organize volunteer gleaners to harvest a variety of leftover crops that are donated by farmers for the purpose of feeding food-insecure individuals. Thus, the gleaning process simultaneously reduces food waste and food insecurity. However, the operationalization of this process is challenging because gleaning relies on two uncertain sources of input: the food and labor supplies. The purpose of this paper is to help gleaning organizations increase the (value-weighted) volume of fresh food gleaned by better managing the uncertainties in the gleaning operation. We develop a model to capture the uncertainties in food and labor supplies and seek a dynamic volunteer-staffing policy that maximizes the payoff associated with the amount of food gleaned. The exact analysis of the staffing problem seems intractable. Therefore, we resort to an approximation in the heavy traffic regime. In that regime, we characterize the system dynamics of the multicrop gleaning operation and derive the optimal staffing policy in closed form. The optimal policy is a nested threshold policy that specifies the staffing level for each class of donation (i.e., a donation of a particular crop type and donation size). The policy depends on the number of available gleaners and the backlog of gleaning donations. A numerical study using data calibrated from a gleaning organization in the Boston area shows that the dynamic staffing policy we propose can recover approximately 10% of the volume lost when the gleaning organization uses a static policy. To achieve this improvement, no capital or major process changes would be required--only some small changes to the staffing level requests. History: An earlier version of this paper was circulated with the title 'Dynamic Staffing of Volunteer Gleaning Operations' and is available from SSRN: https://ssrn.com/abstract=2873250 or http://dx.doi.org/10.2139/ssrn.2873250. 
Funding: This work is supported by the Neubauer Family Foundation at the University of Chicago Booth School of Business. Supplemental Material: The online appendices are available at https://doi.org/10.1287/opre.2018.1792. Keywords: gleaning * volunteer * food bank * food waste * food insecurity * dynamic control, 1. Introduction The practice of gleaning combines two societal problems to create an elegant solution for both. Gleaning dates back to ancient times when landowners allowed the poor to gather [...]
- Published
- 2019
- Full Text
- View/download PDF
34. Partially Observable Markov Decision Processes: A Geometric Technique and Analysis.
- Author
-
Zhang, Hao
- Subjects
DYNAMIC programming ,MARKOV processes ,ALGORITHMS ,COMPUTATIONAL complexity ,MATHEMATICS ,COMPUTER science ,ARTIFICIAL intelligence - Abstract
This paper presents a novel framework for studying partially observable Markov decision processes (POMDPs) with finite state, action, observation sets, and discounted rewards. The new framework is solely based on future-reward vectors associated with future policies, which is more parsimonious than the traditional framework based on belief vectors. It reveals the connection between the POMDP problem and two computational geometry problems, i.e., finding the vertices of a convex hull and finding the Minkowski sum of convex polytopes, which can help solve the POMDP problem more efficiently. The new framework can clarify some existing algorithms over both finite and infinite horizons and shed new light on them. It also facilitates the comparison of POMDPs with respect to their degree of observability, as a useful structural result. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
35. Note: Generalized Notions of Concavity with an Application to Capacity Management.
- Author
-
Semple, John
- Subjects
OPERATIONS research ,INDUSTRIAL capacity ,DYNAMIC programming ,INVENTORY control ,MATHEMATICS ,INTERPOLATION ,NUMERICAL analysis - Abstract
We introduce a generalization of K-concavity termed weak (K1, K2)-concavity and show how it can be used to analyze certain dynamic systems arising in capacity management. We show that weak (K1, K2)-concavity has two fundamental properties that are relevant for the analysis of such systems: First, it is preserved for linear interpolations; second, it is preserved for certain types of linear extensions. In capacity management problems where both buying and selling capacity involve a fixed cost plus a proportional cost/revenue term, interpolations and extensions are fundamental building blocks of the optimality analysis. In the context of the capacity management problem studied by Ye and Duenyas (2007), we show that weak (K1, K2)-concavity is sufficient to prove the general structure of the optimal policy established in that paper. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
36. Nonconvex Structures in Nonlinear Programming.
- Author
-
Scholtes, Stefan
- Subjects
NONLINEAR programming ,NONCONVEX programming ,NONDIFFERENTIABLE functions ,COMBINATORIAL topology ,MATHEMATICAL optimization - Abstract
Nonsmoothness and nonconvexity in optimization problems often arise because a combinatorial structure is imposed on smooth or convex data. The combinatorial aspect can be explicit, e.g., through the use of "max," "min," or "if" statements in a model; or implicit, as in the case of bilevel optimization, where the combinatorial structure arises from the possible choices of active constraints in the lower-level problem. In analyzing such problems, it is desirable to decouple the combinatorial aspect from the nonlinear aspect and deal with them separately. This paper suggests a problem formulation that explicitly decouples the two aspects. A suitable generalization of the traditional Lagrangian framework allows an extension of the popular sequential quadratic programming (SQP) methodology to such structurally nonconvex nonlinear programs. We show that the favorable local convergence properties of SQP are retained in this setting and illustrate the potential of the approach in the context of optimization problems with max-min constraints that arise, for example, in robust optimization. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
37. SEARCHING FOR AN AGENT WHO MAY OR MAY NOT WANT TO BE FOUND.
- Author
-
Alpern, Steve and Gal, Shmuel
- Subjects
MILITARY science ,MATHEMATICS ,INTELLIGENCE officers ,MISSING persons ,SEARCH theory ,OPERATIONS research - Abstract
There is an extensive theory regarding optimal continuous path search for a mobile or immobile 'target.' The traditional theory assumes that the target is one of three types: (i) an object with a known distribution of paths, (ii) a mobile or immobile hider who wants to avoid or delay capture, or (iii) a rendezvouser who wants to find the searcher. This paper introduces a new type of search problem by assuming that the aims of the target are not known to the searcher. The target may be either a type (iii) cooperator (with a known cooperation probability c) or a type (ii) evader. This formulation models search problems such as that for a lost teenager who may be a 'runaway,' or a lost intelligence agent who may be a defector. In any given search context, it produces a continuum of search problems T(c), 0 ≤ c ≤ 1, linking a zero-sum search game (with c = 0) to a rendezvous problem (with c = 1). These models thus provide a theoretical bridge between two previously distinct parts of search theory, namely search games and rendezvous search. [ABSTRACT FROM AUTHOR]
- Published
- 2002
- Full Text
- View/download PDF
38. THE GENESIS OF "OPTIMAL INVENTORY POLICY"
- Author
-
Arrow, Kenneth J.
- Subjects
INVENTORIES ,INVENTORY control ,OPERATIONS research ,PRODUCT management ,MILITARY supplies ,MATHEMATICS ,LINEAR statistical models - Abstract
In this article, the author recalls the circumstances leading to his co-authorship of the 1951 paper, "Optimal Inventory Policy." His involvement in inventory research could be described as a confluence of chance events. He had started his graduate study at Columbia University in 1940. His main interest was mathematical statistics. One of his teachers there was Harold Hotelling. However, there was then no Department of Statistics anywhere in the United States. His studies were interrupted by World War II. He volunteered for military service and spent a little over three years on duty as a weather officer. While in the military, he received one of the first documents outlining the idea of sequential statistical analysis, and was excited by it both for practical reasons and for the intellectual horizons it opened up. The Navy had a clear interest in minimizing inventory costs. His team had begun research through a logistics project at George Washington University. The other inventory model was stochastic but static; there was only one period involved.
- Published
- 2002
- Full Text
- View/download PDF
39. Divide and Conquer: Recursive Likelihood Function Integration for Hidden Markov Models with Continuous Latent Variables
- Author
-
Reich, Gregor
- Subjects
Likelihood functions -- Evaluation ,Hidden Markov models -- Evaluation ,Latent variables -- Usage ,Business ,Mathematics - Abstract
This paper develops a method to efficiently estimate hidden Markov models with continuous latent variables using maximum likelihood estimation. To evaluate the (marginal) likelihood function, I decompose the integral over the unobserved state variables into a series of lower dimensional integrals, and recursively approximate them using numerical quadrature and interpolation. I show that this procedure has very favorable numerical properties: First, the computational complexity grows linearly in the number of periods, making the integration over hundreds and thousands of periods feasible. Second, I prove that the numerical error accumulates sublinearly in the number of time periods integrated, so the total error can be well controlled for a very large number of periods using, for example, Gaussian quadrature and Chebyshev polynomials. I apply this method to the bus engine replacement model of Rust [Econometrica 55(5): 999-1033] to verify the accuracy and speed of the procedure in both actual and simulated data sets. Supplemental Material: The e-companion is available at https://doi.org/10.1287/opre.2018.1750. Keywords: hidden Markov models * maximum likelihood estimation * numerical integration * interpolation, 1. Introduction This paper develops a method to efficiently estimate hidden Markov models with continuous latent variables using maximum likelihood estimation (MLE). To evaluate the (marginal) likelihood function, I decompose [...]
- Published
- 2018
- Full Text
- View/download PDF
40. Approximation Algorithms for a Class of Stochastic Selection Problems with Reward and Cost Considerations
- Author
-
Strinka, Zohar M.A. and Romeijn, H. Edwin
- Subjects
Mathematical optimization -- Analysis ,Learning models (Stochastic processes) -- Analysis ,Logistics -- Analysis ,Algorithms -- Analysis ,Algorithm ,Business ,Mathematics - Abstract
We study a class of problems with both binary selection decisions and associated continuous choices that result in stochastic rewards and costs. The rewards are received based on the decision maker's selection, and the costs depend both on the decisions and realizations of the stochastic variables. We consider a family of risk-based objective functions that contains the traditional risk-neutral expected-value objective as a special case. A combination of rounding and sample average approximation is used to produce solutions that are guaranteed to be close to the optimal solution with high probability. We also provide an empirical comparison of the performance of the algorithms on a set of randomly generated instances of a supply chain example problem. The computational results illustrate the theoretical claims in the paper that, for this problem, high-quality solutions can be found with small computational effort. Funding: This research was conducted with government support under and awarded by a Department of Defense (DoD), Air Force Office of Scientific Research, National Defense Science and Engineering Graduate (NDSEG) Fellowship, 32 CFR 168a. Keywords: two-stage stochastic optimization * selection * resource allocation * approximation algorithms, 1. Introduction In this paper we study a class of two-stage stochastic selection problems with recourse and develop approximation algorithms to efficiently solve them. In particular, a subset of options [...]
- Published
- 2018
- Full Text
- View/download PDF
41. An Axiomatic Characterization of a Class of Locations in Tree Networks.
- Author
-
Foster, Dean P. and Vohra, Rakesh V.
- Subjects
AXIOMATIC set theory ,AXIOMS ,MATHEMATICAL analysis ,CONVEX functions ,SOCIAL choice ,RESOURCE allocation ,MATHEMATICS - Abstract
In this paper we describe four axioms that uniquely characterize the class of locations in tree networks that are obtained by minimizing an additively separable, nonnegative, nondecreasing, differentiable, and strictly convex function of distances. This result is analogous to results that have been obtained in the theory of bargaining, social choice, and fair resource allocation. [ABSTRACT FROM AUTHOR]
- Published
- 1998
- Full Text
- View/download PDF
42. AN EXACT SUBLINEAR ALGORITHM FOR THE MAX-FLOW, VERTEX DISJOINT PATHS AND COMMUNICATION PROBLEMS ON RANDOM GRAPHS.
- Author
-
Hochbaum, Dorit S.
- Subjects
ALGORITHMS ,RANDOM graphs ,GRAPH theory ,VERTEX operator algebras ,DISTRIBUTION (Probability theory) ,MATHEMATICS ,MATHEMATICAL optimization - Abstract
This paper describes a randomized algorithm for solving the maximum-flow maximum-cut problem on connected random graphs. The algorithm is very fast--it does not look up most vertices in the graph. Another feature of this algorithm is that it almost surely provides, along with an optimal solution, a proof of optimality of the solution. In addition, the algorithm's solution is, by construction, a maximum collection of vertex-disjoint paths. Under a restriction on the graph's density, an optimal solution to the NP-hard communication problem is provided as well, that is, finding a maximum collection of vertex-disjoint paths between sender-receiver pairs of terminals. The algorithm lends itself to a sublogarithmic parallel and distributed implementation. Its effectiveness is demonstrated via extensive empirical study. [ABSTRACT FROM AUTHOR]
- Published
- 1992
- Full Text
- View/download PDF
43. MILITARY DECISION, GAME THEORY AND INTELLIGENCE: AN ANECDOTE.
- Author
-
Ravid, Itzhak
- Subjects
GAME theory ,MILITARY intelligence ,MILITARY science ,OPERATIONS research ,DECISION making ,MATHEMATICS ,INDUSTRIAL engineering ,SYSTEMS theory ,PROBABILITY theory - Abstract
Some 30 years ago, O. G. Haywood, Jr. published a pioneering paper relating game theory considerations to historical cases of World War II ("Military Decision and Game Theory"). The paper has become one of the most cited in military operations research literature. Facts revealed long after its publication have shed new light upon the historical case and bear upon the core of that paper. We present the story and consider its implications. [ABSTRACT FROM AUTHOR]
- Published
- 1990
- Full Text
- View/download PDF
44. A Note on Linearly Decreasing, Delay-Dependent Non-Preemptive Queue Disciplines.
- Author
-
Bagchi, Uttarayan
- Subjects
QUEUING theory ,LINEAR systems ,STOCHASTIC processes ,MATHEMATICS ,MONTE Carlo method ,OPERATIONS research ,PRODUCTION scheduling ,GAME theory - Abstract
Previous authors have presented expressions for expected waiting time in linearly time-dependent queue disciplines. This paper points out two errors in a paper by Hsu on linearly decreasing priority systems. [ABSTRACT FROM AUTHOR]
- Published
- 1984
- Full Text
- View/download PDF
45. Optimal Whereabouts Search for a Moving Target.
- Author
-
Stone, Lawrence D. and Kadane, Joseph B.
- Subjects
SEARCH & rescue operations ,STATISTICAL decision making ,OPERATIONS research ,MATHEMATICS - Abstract
This paper shows that solving the optimal whereabouts search problem for a moving target is equivalent to solving a finite number of optimal detection problems for moving targets. This generalizes the result of Kadane [1971] for stationary targets. [ABSTRACT FROM AUTHOR]
- Published
- 1981
- Full Text
- View/download PDF
46. A Stochastic Sequential Allocation Model.
- Author
-
Derman, C., Lieberman, C. J., and Ross, S. M.
- Subjects
INVESTMENTS ,CONTINUOUS functions ,PROBABILITY theory ,MATHEMATICS ,POISSON processes ,STATISTICAL correlation ,GAME theory ,FINANCE - Abstract
This paper considers the following model, described in terms of an investment problem. We have D units available for investment. During each of N time periods an opportunity to invest will occur with probability p. As soon as an opportunity presents itself, we must decide how much of our available resources to invest. If we invest y, then we obtain an expected profit P(y), where P is a nondecreasing continuous function. The amount y then becomes unavailable for future investment. The problem is to decide how much to invest at each opportunity so as to maximize total expected profit. When P(y) is a concave function, the structure of the optimal policy is obtained (Section 1). Bounds on the optimal value function and asymptotic results are presented in Section 2. A closed-form expression for the optimal value to invest is found in Section 3 for the special cases of P(y) = log y and P(y) = y^α for 0 < α < 1. Section 4 presents a continuous-time version of the model, i.e., we assume that opportunities occur in accordance with a Poisson process. Other applications of the model are also considered. [ABSTRACT FROM AUTHOR]
- Published
- 1975
- Full Text
- View/download PDF
47. Generalized Utility Independence and Some Implications.
- Author
-
Fishburn, Peter C. and Keeney, Ralph L.
- Subjects
UTILITY functions ,ECONOMIC demand ,QUASICONFORMAL mappings ,STATISTICAL hypothesis testing ,VALUE (Economics) ,MULTIPLE criteria decision making ,MATHEMATICAL models ,MATHEMATICS - Abstract
This paper introduces the concept of generalized utility independence. Subject to various generalized utility independence assumptions, we derive three functional forms for a multiattribute von Neumann-Morgenstern utility function u. These are the additive, the multiplicative, and the quasi-additive forms, each of which expresses u as a combination of utility functions defined on the separate attributes. It is demonstrated that if u is unbounded from above and below, then given the three forms, either reversal of preferences over some attributes occurs or else the additive form must hold. [ABSTRACT FROM AUTHOR]
- Published
- 1975
- Full Text
- View/download PDF
48. Comments on the Distribution of Inventory Position in a Continuous-Review (s,S) Inventory System.
- Author
-
Richards, F. R.
- Subjects
INVENTORIES ,INVENTORY control ,MARKOV processes ,OPERATIONS research ,INDUSTRIAL procurement ,PROBABILITY theory ,MATHEMATICS ,PRODUCT management ,PHYSICAL distribution of goods - Abstract
This paper comments on the limiting distribution of the inventory position in a continuous-review inventory system with arbitrary customer interarrival-time distribution. Sivazlian has considered an inventory model of a continuous-review system in which the interarrival times between successive customers are independent and identically distributed with arbitrary density function. The inventory position includes the stock on hand, less any backorders, plus the total amount on order. That the distribution of the inventory position in the unit demand case is uniform is certainly not surprising. Thus, the embedded Markov chain that describes the process immediately after each demand has a uniform distribution. By adjusting these values for the amount of time spent in each state, which has the same distribution regardless of the state, and normalizing, one obtains the desired result.
- Published
- 1975
- Full Text
- View/download PDF
49. Strong Optimality of the Shoot-Adjust-Shoot Strategy.
- Author
-
Barr, Donald R.
- Subjects
APPROXIMATION theory ,FUNCTIONAL analysis ,STOCHASTIC processes ,STOCHASTIC approximation ,MATHEMATICS ,COORDINATE transformations ,MATHEMATICAL transformations ,OPERATIONS research ,SYSTEMS theory - Abstract
This paper shows that seemingly different 'adjustment' procedures are equivalent, if viewed in appropriate coordinate systems. It extends previous results concerning sequential adjustments that are constrained to be linear functions of observed impact points to the class of translation-invariant procedures. It also reviews properties of the optimal sequential adjustment procedure, including some related to stochastic approximation. [ABSTRACT FROM AUTHOR]
- Published
- 1974
- Full Text
- View/download PDF
50. Technical Note--On the Relation Between Several Discrete Choice Models
- Author
-
Feng, Guiyun, Li, Xiaobo, and Wang, Zizhuo
- Subjects
Management science -- Innovations ,Finite sets -- Usage ,Mathematical optimization -- Usage ,Business ,Mathematics - Abstract
In this paper, we study the relationship between several well known classes of discrete choice models, i.e., the random utility model (RUM), the representative agent model (RAM), and the semiparametric choice model (SCM). Using a welfare-based model as an intermediate, we show that the RAM and the SCM are equivalent. Furthermore, we show that both models as well as the welfare-based model strictly subsume the RUM when there are three or more alternatives, while the four are equivalent when there are only two alternatives. Thus, this paper presents a complete picture of the relationship between these choice models. Funding: The research of the third author is supported by the National Science Foundation [Grant CMMI-1462676]. Keywords: welfare function * random utility model * representative agent model * semiparametric choice model, 1. Introduction In this paper, we study discrete choice models. Discrete choice models are used to model choices made by people among a finite set of alternatives. As examples, [...]
- Published
- 2017
- Full Text
- View/download PDF