257 results
Search Results
2. KNN and adaptive comfort applied in decision making for HVAC systems.
- Author
-
Aparicio-Ruiz, Pablo, Barbadilla-Martín, Elena, Guadix, José, and Cortés, Pablo
- Subjects
THERMAL comfort ,DECISION making ,SUPPORT vector machines ,ALGORITHMS ,AIR conditioning ,HEATING & ventilation industry - Abstract
Selecting a suitable set-point temperature for a heating, ventilating and air conditioning system is an energy and environmental challenge in our society. In the present paper, a general framework to define such a temperature based on a dynamic adaptive comfort algorithm is proposed. Because the thermal comfort of a building's occupants has different ranges of acceptability, this method is applied to learn the comfort temperature with respect to the running mean temperature and thus to decide the suitable range of indoor temperature. It is demonstrated that this solution makes it possible to dynamically build an adaptive comfort algorithm, an algorithm based on human thermal adaptability, without applying the traditional theory. The proposed methodology, based on the K-Nearest-Neighbour algorithm, was tested and compared with data from an experimental thermal comfort field study carried out in a mixed-mode building in south-western Spain and with the Support Vector Machine method. The results show that the K-Nearest-Neighbour algorithm represents the pattern of thermal comfort data better than the traditional solution and that it is a suitable method to learn the thermal comfort area of a building and to define the set-point temperature for a heating, ventilating and air-conditioning system. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
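A minimal sketch of the idea in the abstract above: learn a comfort temperature from field-study samples with k-nearest neighbours over the outdoor running mean temperature, then derive a set-point range. All data values and the ±1.5 °C acceptability band are invented for illustration; the paper's actual dataset and band are not reproduced here.

```python
# Hedged sketch: KNN-learned comfort temperature -> HVAC set-point range.
# The sample data and the comfort band are hypothetical, not from the paper.

def knn_comfort(history, running_mean, k=3):
    """Predict comfort temperature as the mean of the k nearest samples.

    history: list of (running_mean_outdoor_temp, observed_comfort_temp).
    """
    nearest = sorted(history, key=lambda s: abs(s[0] - running_mean))[:k]
    return sum(c for _, c in nearest) / len(nearest)

def setpoint_range(comfort_temp, band=1.5):
    """Acceptable indoor range around the learned comfort temperature."""
    return (comfort_temp - band, comfort_temp + band)

# Hypothetical field-study samples (outdoor running mean, comfort temp):
data = [(10, 21.0), (14, 22.0), (18, 23.0), (22, 24.5), (26, 26.0)]
comfort = knn_comfort(data, 17, k=3)
low, high = setpoint_range(comfort)
```

The set-point controller would then keep the indoor temperature inside `(low, high)`, re-learning the comfort temperature as the running mean changes.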
3. In-house production and outsourcing under different discount schemes on the total outsourcing cost.
- Author
-
Lu, Lingfa, Zhang, Liqi, and Ou, Jinwen
- Subjects
CONTRACTING out ,PRODUCTION scheduling ,APPROXIMATION algorithms ,COST - Abstract
In this paper we consider a coordinated in-house production scheduling and outsourcing model, where a manufacturer might outsource part of the production to a subcontractor so as to achieve a tight production due date. The manufacturer pays a specific outsourcing cost for each job that is outsourced. To encourage the manufacturer to outsource more jobs, the subcontractor provides a specific discount scheme on the total outsourcing cost. Previous studies focus on the balance between in-house production performance and the total outsourcing cost. Our model is the first to investigate how different discount schemes on the total outsourcing cost affect the manufacturer's decision-making. Four distinct discount schemes on the total outsourcing cost are studied. For each we either show the intractability of the problem, or provide an efficient exact or approximation algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
4. An exact algorithm for the type-constrained and variable sized bin packing problem.
- Author
-
Chunyang Zhou, Chongfeng Wu, and Yun Feng
- Subjects
ALGORITHMS ,COMBINATORIAL probabilities ,BINS ,ALGEBRA ,PROBABILITY theory ,BULK solids handling ,OPERATIONS research - Abstract
In this paper, we introduce an additional constraint to the one-dimensional variable sized bin packing problem. In practice, some items must be packed separately in different bins due to their specific requirements, and these items are therefore labelled as different types. A bin can receive items of any type while it is empty, but once packed it can only receive items of the same type as those it already contains. We model the problem as a type-constrained and variable sized bin packing problem (TVSBPP), and solve it via a branch and bound method. An efficient backtracking procedure is proposed to improve the efficiency of the algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
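The paper solves the TVSBPP exactly by branch and bound; the sketch below only illustrates the packing rule the abstract describes: a bin may receive an item either when it is empty or when it already holds the same type. This is a plain first-fit greedy heuristic (not the authors' algorithm), and bin capacity is uniform here for simplicity, whereas the paper allows variable bin sizes.

```python
# Hedged sketch: first-fit packing under the type constraint of the TVSBPP.
# Greedy illustration only; the paper's exact branch-and-bound is not shown.

def first_fit_typed(items, capacity):
    """items: list of (size, type_label). Returns a list of bins, each a
    dict with the bin's 'type', total 'used' capacity, and packed 'items'."""
    bins = []
    for size, typ in items:
        placed = False
        for b in bins:
            # A non-empty bin only accepts items of its own type.
            if b["type"] == typ and b["used"] + size <= capacity:
                b["items"].append(size)
                b["used"] += size
                placed = True
                break
        if not placed:
            # Open a fresh (empty) bin, which fixes its type from now on.
            bins.append({"type": typ, "used": size, "items": [size]})
    return bins
```

For example, packing `[(4, "a"), (3, "b"), (5, "a"), (2, "b"), (6, "a")]` into bins of capacity 10 opens three bins, because the last "a" item no longer fits in the first "a" bin.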
5. AN EXACT ALGORITHM FOR SOLVING A CAPACITATED LOCATION-ROUTING PROBLEM.
- Author
-
Laporte, G., Nobert, Y., and Arpin, D.
- Subjects
ALGORITHMS ,TERMINALS (Transportation) ,INTEGER programming ,MATHEMATICAL programming ,LINEAR programming ,OPERATING costs ,OPERATIONS research - Abstract
In location-routing problems, the objective is to locate one or many depots within a set of sites (representing customer locations or cities) and to construct delivery routes from the selected depot or depots to the remaining sites at the least system cost. The objective function is the sum of depot operating costs, vehicle acquisition costs and routing costs. This paper considers one such problem in which a weight is assigned to each site and where sites are to be visited by vehicles having a given capacity. The solution must be such that the sum of the weights of sites visited on any given route does not exceed the capacity of the visiting vehicle. The formulation of an integer linear program for this problem involves degree constraints, generalized subtour elimination constraints, and chain barring constraints. An exact algorithm, using initial relaxation of most of the problem constraints, is presented which is capable of solving problems with up to twenty sites within a reasonable number of iterations. [ABSTRACT FROM AUTHOR]
- Published
- 1986
6. On scheduling with the non-idling constraint
- Author
-
Chrétienne, Philippe
- Published
- 2016
- Full Text
- View/download PDF
7. On the geometry, preemptions and complexity of multiprocessor and shop scheduling.
- Author
-
Shchepin, Evgeny V. and Vakhania, Nodari
- Subjects
GEOMETRY ,MULTIPROCESSORS ,PRODUCTION scheduling ,POLYNOMIALS ,ACYCLIC model ,ALGORITHMS - Abstract
In this paper we study multiprocessor and open shop scheduling problems from several points of view. We explore a tight dependence of the polynomial solvability/intractability on the number of allowed preemptions. For an exhaustive interrelation, we address the geometry of problems by means of a novel graphical representation. We use the so-called preemption and machine-dependency graphs for preemptive multiprocessor and shop scheduling problems, respectively. In a natural manner, we call a scheduling problem acyclic if the corresponding graph is acyclic. There is a substantial interrelation between the structure of these graphs and the complexity of the problems. Acyclic scheduling problems are quite restrictive; at the same time, many of them still remain NP-hard. We believe that an exhaustive study of acyclic scheduling problems can lead to a better understanding and give a better insight into general scheduling problems. We show that not only acyclic but also a special non-acyclic version of periodic job-shop scheduling can be solved in polynomial (linear) time. In that version, the corresponding machine dependency graph is allowed to have a special type of so-called parti-colored cycles. We show that trivial extensions of this problem become NP-hard. Then we suggest a linear-time algorithm for the acyclic open-shop problem in which at most m−2 preemptions are allowed, where m is the number of machines. This result is also tight, as we show that if we allow one less preemption, then this strongly restricted version of the classical open-shop scheduling problem becomes NP-hard. In general, we show that very simple acyclic shop scheduling problems are NP-hard. As an example, any flow-shop problem with a single job with three operations and the rest of the jobs with a single non-zero length operation is NP-hard.
We suggest a linear-time approximation algorithm with worst-case performance of $\|\mathcal{M}\|+2\|\mathcal{J}\|$ ($\|\mathcal{M}\|+\|\mathcal{J}\|$, respectively) for acyclic job-shop (open-shop, respectively), where $\|\mathcal{J}\|$ ($\|\mathcal{M}\|$, respectively) is the maximal job length (machine load, respectively). We show that no algorithm for scheduling acyclic job-shop can guarantee a better worst-case performance than $\|\mathcal{M}\|+\|\mathcal{J}\|$. We consider two special cases of the acyclic job-shop with the so-called short jobs and short operations (restricting the maximal job and operation length) and solve them optimally in linear time. We show that scheduling m identical processors with at most m−2 preemptions is NP-hard, whereas a venerable early linear-time algorithm by McNaughton yields m−1 preemptions. Another multiprocessor scheduling problem we consider is that of scheduling m unrelated processors with an additional restriction that the processing time of any job on any machine is no more than the optimal schedule makespan C. We show that the (2m−3)-preemptive version of this problem is polynomially solvable, whereas the (2m−4)-preemptive version becomes NP-hard. For general unrelated processors, we guarantee near-optimal (2m−3)-preemptive schedules. The makespan of such a schedule is no more than either the corresponding non-preemptive schedule makespan or $\max\{C, p_{\max}\}$, where C is the optimal (preemptive) schedule makespan and $p_{\max}$ is the maximal job processing time. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
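McNaughton's wrap-around rule, cited in the abstract above as the classical m−1-preemption benchmark, is simple enough to sketch: compute the optimal makespan C = max(p_max, Σp/m), then fill machines one after another, cutting a job whenever the current machine reaches C. This is the textbook algorithm, not code from the paper.

```python
# Hedged sketch of McNaughton's wrap-around algorithm for preemptive
# scheduling on m identical machines (at most m-1 preemptions, optimal C).

def mcnaughton(p, m):
    """p: job processing times. Returns (makespan, schedule), where the
    schedule maps machine index -> list of (job, start, end) pieces."""
    C = max(max(p), sum(p) / m)          # optimal preemptive makespan
    sched = {i: [] for i in range(m)}
    mach, t = 0, 0.0
    for j, pj in enumerate(p):
        left = pj
        while left > 1e-12:
            piece = min(left, C - t)     # cut the job at the machine's end
            sched[mach].append((j, t, t + piece))
            t += piece
            left -= piece
            if C - t < 1e-12:            # machine full: wrap to the next one
                mach, t = mach + 1, 0.0
    return C, sched
```

For three unit-speed jobs of length 3 on two machines, C = 4.5 and exactly one job is split across the machine boundary, i.e. one preemption (= m − 1).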
8. AN OPTIMAL MINIMAX ALGORITHM.
- Author
-
Gilbert, E. N.
- Subjects
VIDEO games ,VIDEO game development ,ELECTRONIC games ,MATRICES (Mathematics) ,RANDOM variables - Abstract
Computer game-playing programs repeatedly calculate minimax elements $\mu = \min_i \max_j M_{ij}$ of large payoff matrices $M_{ij}$. A straightforward row-by-row calculation of μ scans rows of $M_{ij}$ one at a time, skipping to a new row whenever an element is encountered that exceeds a current minimax. An optimal calculation, derived here, scans the matrix more erratically but finds μ after testing the fewest possible matrix elements. Minimizing the number of elements tested is reasonable when elements must be computed as needed by evaluating future game positions. This paper obtains the expected number of tests required when the elements are independent, identically distributed random variables. For matrices 50 by 50 or smaller, the expected number of tests required by the row-by-row calculation can be at most 42% greater than the number for the optimal calculation. When the numbers R, C of rows and columns are very large, both calculations require an expected number of tests near $RC/\ln R$. [ABSTRACT FROM AUTHOR]
- Published
- 1985
- Full Text
- View/download PDF
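The "row-by-row" baseline the abstract compares against is easy to sketch: track the current minimax μ and abandon a row as soon as its running maximum reaches μ, since that row can no longer improve the minimum. The counter makes the "number of elements tested" cost model explicit. This is the baseline, not Gilbert's optimal scan order.

```python
# Hedged sketch: row-by-row minimax with early row abandonment, counting
# element tests (the cost measure used in the abstract above).

def row_by_row_minimax(M):
    """Return (minimax value, number of matrix elements tested)."""
    mu, tests = None, 0
    for row in M:
        row_max = None
        for x in row:
            tests += 1
            if row_max is None or x > row_max:
                row_max = x
            # If the row's running max already reaches mu, the true row max
            # is >= mu, so this row cannot lower the minimax: skip ahead.
            if mu is not None and row_max >= mu:
                break
        if mu is None or row_max < mu:
            mu = row_max
    return mu, tests
```

On the 3×3 matrix `[[3, 1, 2], [2, 4, 1], [5, 0, 0]]` the routine tests only 6 of the 9 elements and returns μ = 3.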
9. Research on financial management of Guangdong-Hong Kong-macao greater Bay Area based on LS-SVM algorithm and multi-model fusion
- Author
-
Liu Yixin and Zhang Miao
- Subjects
Financial management ,Support vector machine ,CUDA ,business.industry ,Theory of computation ,Financial analysis ,Process (computing) ,General Decision Sciences ,Construct (python library) ,Management Science and Operations Research ,business ,Algorithm ,Hausman test - Abstract
In order to study the financial management of the Guangdong-Hong Kong-Macao Greater Bay Area, this paper builds a financial analysis model based on the LS-SVM algorithm and analyzes the implementation and modeling process of the LS-SVM algorithm. Moreover, this paper investigates CUDA-based GPU high-performance computing methods, designs and implements the LS-SVM algorithm on the CUDA-based GPU platform, and compares the performance before and after optimization. In addition, this paper uses the improved algorithm to construct the financial analysis model and verifies the effect of the proposed method through analysis of actual data. The research results demonstrate the effectiveness of this method: the Hausman test yields a probability of 0.1005 with a chi-square statistic of 9.60182558.
- Published
- 2021
10. Comprehensive analysis of gradient-based hyperparameter optimization algorithms
- Author
-
Oleg Bakhteev and Vadim V. Strijov
- Subjects
Hyperparameter ,021103 operations research ,Artificial neural network ,Computer science ,Model selection ,0211 other engineering and technologies ,Stability (learning theory) ,General Decision Sciences ,02 engineering and technology ,Management Science and Operations Research ,Overfitting ,Statistics::Machine Learning ,Random search ,Hyperparameter optimization ,Gradient descent ,Algorithm - Abstract
The paper investigates the hyperparameter optimization problem. Hyperparameters are the parameters of the model parameter distribution, and an adequate choice of hyperparameter values prevents model overfitting and allows the model to achieve higher predictive performance. Neural network models with large numbers of hyperparameters are analyzed; hyperparameter optimization for such models is computationally expensive. The paper proposes modifications of various gradient-based methods to simultaneously optimize many hyperparameters, and compares the experimental results with random search. The main contribution of the paper is an analysis of hyperparameter optimization algorithms for models with large numbers of parameters. To select precise and stable models, the authors suggest using two model selection criteria: cross-validation and the evidence lower bound. The experiments show that models optimized using the evidence lower bound give a higher error rate than models obtained using cross-validation, but also show greater stability when the data is noisy. Use of the evidence lower bound is preferable when the model tends to overfit or when cross-validation is computationally expensive. The algorithms are evaluated on regression and classification datasets.
- Published
- 2019
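The random-search baseline the paper compares against is straightforward to sketch: sample hyperparameter vectors uniformly from their ranges, evaluate each, and keep the best. The quadratic "loss" below is an invented stand-in for a real train-and-validate run; the search ranges and optimum are hypothetical.

```python
# Hedged sketch: random search over a hyperparameter space, the baseline
# that gradient-based hyperparameter optimization is compared against.

import random

def random_search(loss, space, n_iter=200, seed=0):
    """space: dict name -> (low, high). Returns (best_params, best_loss)."""
    rng = random.Random(seed)
    best, best_val = None, float("inf")
    for _ in range(n_iter):
        # One candidate hyperparameter vector, sampled uniformly per axis.
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in space.items()}
        val = loss(params)
        if val < best_val:
            best, best_val = params, val
    return best, best_val

# Toy objective with a known optimum at lr=0.1, reg=1.0 (invented):
toy_loss = lambda p: (p["lr"] - 0.1) ** 2 + (p["reg"] - 1.0) ** 2
best, val = random_search(toy_loss, {"lr": (0.0, 1.0), "reg": (0.0, 2.0)})
```

A gradient-based method instead differentiates the validation loss with respect to the hyperparameters, which is what makes it scale to the many-hyperparameter regime the paper studies.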
11. Models and computational algorithms for maritime risk analysis: a review
- Author
-
Taofeek Biobaku, Jaeyoung Cho, Hamid R. Parsaei, Gino J. Lim, and Selim Bora
- Subjects
Risk analysis ,021110 strategic, defence & security studies ,Computer science ,0211 other engineering and technologies ,General Decision Sciences ,020101 civil engineering ,02 engineering and technology ,Management Science and Operations Research ,GeneralLiterature_MISCELLANEOUS ,0201 civil engineering ,Risk analysis (engineering) ,Maritime industry ,Terrorism ,Damages ,Risk assessment ,Algorithm - Abstract
Due to the undesirable implications of maritime mishaps such as ship collisions and the consequent damage to maritime property, the safety and security of waterways, ports and other maritime assets are of the utmost importance to authorities and researchers. Terrorist attacks, piracy, accidents and environmental damage are some of the concerns. This paper provides a detailed literature review of over 180 papers about different threats and their consequences for the maritime industry, and a discussion of various risk assessment models and computational algorithms. The methods are categorized into three main groups: statistical, simulation and optimization models. Statistics on the reviewed papers by year of publication, region of case study and methodology are also presented.
- Published
- 2018
12. An exact algorithm for the type-constrained and variable sized bin packing problem
- Author
-
Zhou, Chunyang, Wu, Chongfeng, and Feng, Yun
- Published
- 2009
- Full Text
- View/download PDF
13. Characterisation of the output process of a discrete-time GI / D / 1 queue, and its application to network performance
- Author
-
Herwig Bruneel, Bart Steyaert, and Sabine Wittevrongel
- Subjects
021103 operations research ,Markov chain ,Computer science ,Network packet ,Distributed computing ,0211 other engineering and technologies ,General Decision Sciences ,02 engineering and technology ,Management Science and Operations Research ,01 natural sciences ,010104 statistics & probability ,Discrete time and continuous time ,Burstiness ,Network performance ,0101 mathematics ,Routing (electronic design automation) ,Queue ,Algorithm ,Dimensioning - Abstract
In this paper we use the burst factor of a packet stream, which is defined in a general setting, to quantify the long-term variability, or burstiness, of such a stream. We briefly review some existing results to show that this parameter plays an important role in the performance assessment and dimensioning of buffers in network nodes, even in a non-Markovian setting. We then focus on the calculation of this parameter at the egress of a discrete-time GI / D / 1 queueing system, considering different routing scenarios, and show how it can be expressed in terms of the parameters that characterise the arrival process in such a queue. In addition, we demonstrate how these results can be applied to evaluate the buffer performance in the subsequent nodes of a network. The analytic results that are derived throughout this paper are supported by simulations.
- Published
- 2015
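The burst-factor analysis in the abstract above builds on the basic discrete-time single-server recursion with deterministic unit service times: the system content evolves as q(k+1) = max(q(k) + a(k) − 1, 0), where a(k) is the number of packet arrivals in slot k. The sketch below only illustrates that queue dynamic; the paper's burst-factor formulas are not reproduced here.

```python
# Hedged sketch: slot-by-slot system content of a discrete-time queue with
# one deterministic service per slot (the GI/D/1 setting of the abstract).

def gi_d_1_occupancy(arrivals):
    """arrivals: number of packets arriving in each slot.
    Returns the system content observed at the end of each slot."""
    q, trace = 0, []
    for a in arrivals:
        q = max(q + a - 1, 0)   # serve one packet per slot, if any present
        trace.append(q)
    return trace
```

Feeding the same mean arrival rate in a bursty pattern (e.g. `[3, 0, 0, 3, 0, 0]`) versus a smooth one (`[1, 1, 1, 1, 1, 1]`) shows why long-term variability, and not just the load, drives buffer occupancy in the downstream nodes.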
14. A new strongly competitive group testing algorithm with small sequentiality
- Author
-
Yongxi Cheng, Ding-Zhu Du, and Feifeng Zheng
- Subjects
Set (abstract data type) ,Binary splitting ,Sample (material) ,Theory of computation ,General Decision Sciences ,Management Science and Operations Research ,Algorithm ,Upper and lower bounds ,Group testing ,Fault detection and isolation ,Multistage testing ,Mathematics - Abstract
In many fault detection problems, we want to identify all defective items from a sample set of items using the minimum number of tests. Group testing is for the scenario where each test is performed on a subset of items, and tells whether the subset contains at least one defective item or not. In practice, the number of defective items in the sample set is usually unknown. In this paper, we investigate new algorithms for the group testing problem with unknown number of defective items. We consider the scenario where the performance of a group testing algorithm is measured by two criteria: the primary criterion is the number of tests performed, which measures the total cost spent; and the secondary criterion is the number of stages the algorithm works in, which is referred to as the sequentiality of the algorithm in this paper and measures the minimum amount of time required by using the algorithm to identify all the defective items. We present a new algorithm Recursive Binary Splitting (RBS) for the above group testing problem with unknown number of defective items, and prove an upper bound on the number of tests required by RBS. The computational results show that RBS exhibits very good practical performance, measured in terms of both the above two criteria.
- Published
- 2014
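The building block behind algorithms such as the paper's RBS can be sketched with the textbook binary-splitting routine (this is the generic procedure, not the authors' exact algorithm): a group test reports whether a subset contains any defective, and halving contaminated groups isolates each defective in roughly log2(n) tests while discarding clean groups wholesale.

```python
# Hedged sketch: generic binary-splitting group testing with an unknown
# number of defectives. Illustrative only; not the paper's RBS algorithm.

def group_test(subset, defectives):
    """One group test: does the subset contain at least one defective?"""
    return any(i in defectives for i in subset)

def find_all_defectives(items, defectives):
    """Identify every defective via repeated binary splitting.
    Returns (found_set, number_of_group_tests_performed)."""
    found, tests = set(), 0
    pending = [list(items)]
    while pending:
        group = pending.pop()
        tests += 1
        if not group_test(group, defectives):
            continue                      # clean group: discard wholesale
        if len(group) == 1:
            found.add(group[0])           # isolated one defective
            continue
        mid = len(group) // 2             # contaminated: split and recurse
        pending += [group[:mid], group[mid:]]
    return found, tests
```

Note the stack-based version above runs the splits strictly one test at a time; the sequentiality criterion in the paper measures how many of these tests could instead be batched into parallel stages.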
15. Interval reliability for aggregated Markov repairable system with repair time omission
- Author
-
Baoliang Liu, Yanqing Wen, and Lirong Cui
- Subjects
Set (abstract data type) ,Downtime ,Markov chain ,Computer science ,General Decision Sciences ,Applied mathematics ,Interval (mathematics) ,State (functional analysis) ,Management Science and Operations Research ,Markov model ,Algorithm ,Reliability (statistics) - Abstract
In this paper, Markov models of repairable systems with repair time omission are considered, whose finite state space is partitioned into two sets: the set of working states, W, and the set of failed states, F. If the system enters the failed states from a working state and sojourns in F for less than a given nonnegative critical value τ, then the repair interval can be omitted from the downtime records; otherwise, if the sojourn in F exceeds τ, the repair interval cannot be omitted. Under this assumption, a new model is developed. The focus of attention is the new model's availability, interval reliability and interval unreliability, and several results are derived for these reliability indexes. Some special cases and numerical examples, computed using Maple, are given to illustrate the results.
- Published
- 2013
16. Timetable construction: the algorithms and complexity perspective
- Author
-
Jeffrey H. Kingston
- Subjects
Operations research ,Efficient algorithm ,Computer science ,Theory of computation ,Bipartite graph ,General Decision Sciences ,Management Science and Operations Research ,Neighbourhood (mathematics) ,Algorithm - Abstract
This paper advocates approaching timetable construction from the algorithms and complexity perspective, in which analysis of the specific problem under study is used to find efficient algorithms for some of its aspects, or to relate it to other problems. Examples are given of problem analyses leading to relaxations, phased approaches, very large-scale neighbourhood searches, bipartite matchings, ejection chains, and connections with standard NP-complete problems. Although a thorough treatment is not possible in a paper of this length, it is hoped that the examples will encourage timetabling researchers to explore further with a view to utilising some of the techniques in their own work.
- Published
- 2012
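One of the techniques the abstract names, bipartite matching, routinely appears in timetabling as an assignment sub-problem (e.g. meetings to rooms in one time slot). The augmenting-path routine below is the generic algorithm on an invented toy instance, not code from the paper.

```python
# Hedged sketch: maximum bipartite matching by augmenting paths, the kind
# of efficient sub-algorithm the paper advocates identifying inside
# timetable construction problems.

def max_bipartite_matching(adj, n_right):
    """adj[u] lists the right-vertices compatible with left-vertex u.
    Returns (matching_size, match_right) where match_right[v] is the left
    vertex matched to right vertex v, or -1 if v is unmatched."""
    match_right = [-1] * n_right

    def try_assign(u, seen):
        # Try to match u, recursively evicting earlier matches if needed.
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                if match_right[v] == -1 or try_assign(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    size = 0
    for u in range(len(adj)):
        if try_assign(u, [False] * n_right):
            size += 1
    return size, match_right
```

For instance, with three meetings whose feasible rooms are `[[0], [0, 1], [1, 2]]`, the routine evicts and re-routes earlier assignments to reach a perfect matching of size 3.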
17. To lay out or not to lay out?
- Author
-
Sadegh Niroomand, Béla Vizvári, and Szabolcs Takács
- Subjects
Theoretical computer science ,Computer science ,Quadratic assignment problem ,Theory of computation ,Euclidean geometry ,General Decision Sciences ,Combinatorial optimization ,Management Science and Operations Research ,Space (commercial competition) ,Integer programming ,Algorithm ,Travelling salesman problem ,Field (computer science) - Abstract
The Quadratic Assignment Problem (QAP) is known as one of the most difficult problems in combinatorial optimization. It is used to model many practical problems, including various layout problems. The main topic of this paper is to provide methods to check whether a particular instance of the QAP is a layout problem. An instance is a layout problem if the distances of the objects can be reconstructed in the plane and/or in 3-dimensional space. A new mixed integer programming model is suggested for the case where the distances of the objects are assumed to be rectilinear; if the distances are Euclidean, the use of the well-known Multi-Dimensional Scaling (MDS) method from statistics is suggested for reconstruction. The well-known difficulty of the QAP makes it a popular and suitable experimental field for many algorithmic ideas, including artificial intelligence methods, and such results are sometimes published as layout problems. The reconstruction methods can be used to decide whether the topic of a paper is a layout problem or only a general QAP. The issue of what the OR community should expect from AI-based algorithms is also addressed.
- Published
- 2011
18. Analysis of stochastic problem decomposition algorithms in computational grids
- Author
-
Andres Ramos, Jesus M. Latorre, Santiago Cerisola, and Rafael Palacios
- Subjects
Mathematical optimization ,Optimization problem ,Mathematical problem ,Linear programming ,Computer science ,General Decision Sciences ,Management Science and Operations Research ,computer.software_genre ,Grid ,Stochastic programming ,Grid computing ,Theory of computation ,Decomposition method (constraint satisfaction) ,computer ,Algorithm - Abstract
Stochastic programming usually represents uncertainty discretely by means of a scenario tree. This representation leads to an exponential growth of the size of stochastic mathematical problems when better accuracy is needed. Trying to solve the problem as a whole, considering all scenarios together, leads to huge memory requirements that surpass the capabilities of current computers. Thus, decomposition algorithms are employed to divide the problem into several smaller subproblems and to coordinate their solution in order to obtain the global optimum. This paper analyzes several decomposition strategies based on the classical Benders decomposition algorithm, and applies them in the emerging computational grid environments. Most decomposition algorithms are not able to take full advantage of all the computing power available in a grid system because of unavoidable dependencies inherent to the algorithms. However, a special decomposition method presented in this paper aims at reducing dependency among subproblems, to the point where all the subproblems can be sent simultaneously to the grid. All algorithms have been tested in a grid system, measuring execution times required to solve standard optimization problems and a real-size hydrothermal coordination problem. Numerical results are shown to confirm that this new method outperforms the classical ones when used in grid computing environments.
- Published
- 2008
19. On the convergence of the generalized Weiszfeld algorithm
- Author
-
Zvi Drezner
- Subjects
Euclidean distance ,Mathematical optimization ,Demand point ,Saddle point ,Convergence (routing) ,Theory of computation ,Mathematics::Metric Geometry ,General Decision Sciences ,Weber problem ,Function (mathematics) ,Management Science and Operations Research ,Algorithm ,Mathematics - Abstract
In this paper we consider Weber-like location problems. The objective function is a sum of terms, each a function of the Euclidean distance from a demand point. We prove that a Weiszfeld-like iterative procedure for the solution of such problems converges to a local minimum (or a saddle point) when three conditions are met. Many location problems can be solved by the generalized Weiszfeld algorithm. There are many problem instances for which convergence is observed empirically. The proof in this paper shows that many of these algorithms indeed converge.
- Published
- 2008
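The classical Weiszfeld iteration whose generalization the paper studies is short enough to sketch: each step re-weights the demand points by the reciprocal of their current Euclidean distance and moves to the resulting weighted centroid. This is the standard Weber-problem version, not the paper's generalized objective.

```python
# Hedged sketch: the classical Weiszfeld iteration for the (weighted)
# Euclidean 1-median of 2-D demand points.

import math

def weiszfeld(points, weights=None, iters=500, eps=1e-12):
    """Approximate the point minimizing the weighted sum of Euclidean
    distances to the given 2-D demand points."""
    if weights is None:
        weights = [1.0] * len(points)
    # Start at the centroid, a common initialization.
    x = sum(p[0] for p in points) / len(points)
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iters):
        num_x = num_y = denom = 0.0
        for (px, py), w in zip(points, weights):
            d = math.hypot(x - px, y - py)
            if d < eps:          # landed on a demand point: stop iterating
                return x, y
            num_x += w * px / d  # each point pulls with weight w / distance
            num_y += w * py / d
            denom += w / d
        x, y = num_x / denom, num_y / denom
    return x, y
```

The convergence result in the abstract is what justifies stopping such a loop after a fixed iteration budget: under the paper's three conditions the iterates approach a local minimum (or saddle point) rather than cycling.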
20. A Beam Search approach for the optimization version of the Car Sequencing Problem
- Author
-
Joaquín Bautista, Jordi Pereira, and Belarmino Adenso-Díaz
- Subjects
Mathematical optimization ,Optimization problem ,Computer science ,General Decision Sciences ,Management Science and Operations Research ,Counting problem ,Cutting stock problem ,Constraint satisfaction dual problem ,Theory of computation ,Constraint programming ,Beam search ,Computational problem ,Algorithm ,Constraint (mathematics) - Abstract
The Car Sequencing Problem (CSP) is a feasibility problem that has attracted the attention of the Constraint Programming community for a number of years now. In this paper, a new version (opt-CSP) that extends the original problem is defined, converting this into an optimization problem in which the goal is to satisfy the typical hard constraints. This paper presents a solution procedure for opt-CSP using Beam Search. Computational results are presented using public instances that verify the goodness of the procedure and demonstrate its excellent performance in obtaining feasible solutions for the majority of instances while satisfying the new constraints.
- Published
- 2007
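A minimal beam search in the spirit of the paper can be sketched on a tiny invented car-sequencing instance (this is not the authors' implementation): build the sequence one position at a time, keep only the `beam` best partial sequences, and score partials by violations of the standard ratio constraint "at most p cars with the option in any window of q consecutive positions".

```python
# Hedged sketch: beam search for a toy car sequencing instance. The instance,
# scoring, and beam width are illustrative assumptions, not from the paper.

def violations(seq, option, p, q):
    """Count excess optioned cars over all windows of up to q positions."""
    v = 0
    for i in range(len(seq)):
        window = seq[i:i + q]
        v += max(0, sum(option[c] for c in window) - p)
    return v

def beam_search(demand, option, p, q, beam=5):
    """demand: dict car_class -> copies to schedule.
    Returns the best full sequence found."""
    n = sum(demand.values())
    states = [((), dict(demand))]          # (partial sequence, remaining)
    for _ in range(n):
        cand = []
        for seq, rem in states:
            for c in rem:                  # extend by each available class
                if rem[c] > 0:
                    new_rem = dict(rem)
                    new_rem[c] -= 1
                    cand.append((seq + (c,), new_rem))
        # Keep only the `beam` least-violating partial sequences.
        cand.sort(key=lambda s: violations(s[0], option, p, q))
        states = cand[:beam]
    return states[0][0]
```

With two optioned cars ("a") and two plain cars ("b") under "at most 1 optioned car per window of 2", the search finds an alternating sequence with zero violations.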
21. A two pass heuristic algorithm for scheduling ‘blocked out’ units in continuous process industry
- Author
-
Subir Bhattacharya and Sumit Kumar Bose
- Subjects
Mathematical optimization ,Branch and bound ,Heuristic (computer science) ,Computer science ,Heuristic ,Process (computing) ,General Decision Sciences ,Management Science and Operations Research ,Scheduling (computing) ,Product (mathematics) ,Depth-first search ,Heuristics ,Algorithm ,Integer (computer science) - Abstract
This paper addresses the problem of scheduling cascaded ‘blocked out’ continuous processing units separated by finite capacity storage tanks. The raw materials for the product lines arrive simultaneously on the input side of the first unit. But every unit can process only one product line at a time, thus giving rise to the possibility of spillage of raw material due to limited storage capacity. The need to process multiple product lines and the added constraint of multiple intermediate upliftment dates aggravate the problem. This problem is quite common in petrochemical industry. The paper provides a MINLP (Mixed Integer Non-Linear Programming) formulation of the problem. However, for any realistic scheduling horizon, the size of the problem is too large to be solved by standard packages. We have proposed a depth first branch and bound algorithm, guided by heuristics, to help planners in tackling the problem. The suggested algorithm could output near optimal solutions for scheduling horizons of 30 time periods when applied to real life situations involving 3 units and 3 product lines.
- Published
- 2007
22. A machine learning approach to algorithm selection for $\mathcal{NP}$ -hard optimization problems: a case study on the MPE problem
- Author
-
William H. Hsu and Haipeng Guo
- Subjects
Optimization problem ,business.industry ,General Decision Sciences ,Order (ring theory) ,Management Science and Operations Research ,Probabilistic inference ,Machine learning ,computer.software_genre ,Algorithm Selection ,Theory of computation ,Artificial intelligence ,Overall performance ,Experimental methods ,business ,Algorithm ,computer ,Mathematics - Abstract
Given one instance of an \(\mathcal{NP}\) -hard optimization problem, can we tell in advance whether it is exactly solvable or not? If it is not, can we predict which approximate algorithm is the best to solve it? Since the behavior of most approximate, randomized, and heuristic search algorithms for \(\mathcal{NP}\) -hard problems is usually very difficult to characterize analytically, researchers have turned to experimental methods in order to answer these questions. In this paper we present a machine learning-based approach to address the above questions. Models induced from algorithmic performance data can represent the knowledge of how algorithmic performance depends on some easy-to-compute problem instance characteristics. Using these models, we can estimate approximately whether an input instance is exactly solvable or not. Furthermore, when it is classified as exactly unsolvable, we can select the best approximate algorithm for it among a list of candidates. In this paper we use the MPE (most probable explanation) problem in probabilistic inference as a case study to validate the proposed methodology. Our experimental results show that the machine learning-based algorithm selection system can integrate both exact and inexact algorithms and provide the best overall performance compared with any single candidate algorithm.
- Published
- 2007
23. Evaluation of choice set generation algorithms for route choice models
- Author
-
M. Scott Ramming, Shlomo Bekhor, and Moshe Ben-Akiva
- Subjects
Travel time ,Estimation ,Data set ,Route generation ,Choice set ,Theory of computation ,Economics ,Information system ,General Decision Sciences ,Management Science and Operations Research ,Algorithm - Abstract
This paper discusses choice set generation and route choice model estimation for large-scale urban networks. Evaluating the effectiveness of Advanced Traveler Information Systems (ATIS) requires accurate models of how drivers choose routes based on their awareness of the roadway network and their perceptions of travel time. Many of the route choice models presented in the literature pay little attention to empirical estimation and validation procedures. In this paper, a route choice data set collected in Boston is described and the ability of several different route generation algorithms to produce paths similar to those observed in the survey is analyzed. The paper also presents estimation results of some route choice models recently developed using the data set collected.
- Published
- 2006
24. A new approach based on the surrogating method in the project time compression problems
- Author
-
Hadi Mohammadi Bidhandi
- Subjects
Mathematical optimization ,Linear programming ,Computer science ,Time compression ,Decomposition (computer science) ,General Decision Sciences ,Management Science and Operations Research ,Benders' decomposition ,Type (model theory) ,Algorithm ,Integer (computer science) - Abstract
This paper develops a mathematical model for project time compression problems in CPM/PERT type networks. It is noted that this formulation is an adequate approximation for solving the time compression problem with any continuous and non-increasing time-cost curve. The model is a Mixed Integer Linear Program (MILP) with zero-one variables, and a Benders decomposition procedure for analyzing it has been developed. The paper then proposes a new approach based on the surrogating method for solving these problems. In addition, the computer programs required to execute the algorithm have been prepared by the author. An illustrative example is solved by the new algorithm, and the two methods are compared on several numerical examples. Computational experience with these data shows the superiority of the new approach.
- Published
- 2006
25. Algorithms for the optimum communication spanning tree problem
- Author
-
Prabha Sharma
- Subjects
Discrete mathematics ,K-ary tree ,Spanning tree ,Shortest-path tree ,General Decision Sciences ,Management Science and Operations Research ,Minimum spanning tree ,k-minimum spanning tree ,Connected dominating set ,Combinatorics ,Distributed minimum spanning tree ,Euclidean minimum spanning tree ,Algorithm ,Mathematics - Abstract
The Optimum Communication Spanning Tree Problem is a special case of the Network Design Problem. Given a graph G, a set of requirements r_ij and a set of distances d_ij for all pairs of nodes (i,j), the cost of communication for a pair of nodes (i,j) with respect to a spanning tree T is defined as r_ij times the length of the unique path in T that connects nodes i and j. The total cost of communication for a spanning tree is the sum of these costs over all pairs of nodes of G. The problem is to construct a spanning tree whose total cost of communication is smallest among all the spanning trees of G; the problem is known to be NP-hard. Hu (1974) solved two special cases of the problem in polynomial time. Using Hu's result, the first algorithm in this paper begins with a cut-tree obtained by keeping all d_ij equal to the smallest d_ij; for arcs (i,j) that are part of this cut-tree, the corresponding d_ij value is then increased to obtain a near-optimal communication spanning tree in pseudo-polynomial time. In case the distances d_ij satisfy a generalised triangle inequality, the second algorithm in the paper constructs a near-optimum tree in polynomial time by parametrising on the r_ij.
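As a concrete illustration of the cost function defined in this abstract, the following sketch (hypothetical data and function names; not code from the paper) evaluates the total communication cost of a given spanning tree:

```python
# Illustrative sketch: total communication cost of a spanning tree T,
# i.e. the sum over node pairs (i, j) of r_ij times the length of the
# unique i-j path in T.
from itertools import combinations

def path_length(tree, dist, i, j):
    """Length of the unique i-j path in the tree (adjacency dict)."""
    stack = [(i, None, 0.0)]           # depth-first search from i to j
    while stack:
        node, parent, acc = stack.pop()
        if node == j:
            return acc
        for nbr in tree[node]:
            if nbr != parent:
                stack.append((nbr, node, acc + dist[frozenset((node, nbr))]))
    raise ValueError("nodes not connected")

def communication_cost(tree, dist, req):
    """Sum of r_ij * (length of the i-j path in the tree) over all pairs."""
    nodes = sorted(tree)
    return sum(req[frozenset((i, j))] * path_length(tree, dist, i, j)
               for i, j in combinations(nodes, 2))

# Tiny example: the path tree 0-1-2 with unit edge lengths.
tree = {0: [1], 1: [0, 2], 2: [1]}
dist = {frozenset((0, 1)): 1.0, frozenset((1, 2)): 1.0}
req = {frozenset((0, 1)): 2.0, frozenset((0, 2)): 1.0, frozenset((1, 2)): 3.0}
total = communication_cost(tree, dist, req)  # 2*1 + 1*2 + 3*1 = 7.0
```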
- Published
- 2006
26. Analysis of multiserver retrial queueing system: A martingale approach and an algorithm of solution
- Author
-
Vyacheslav M. Abramov
- Subjects
Mathematical optimization ,Exponential distribution ,Probability (math.PR) ,60K25, 60H30 ,Stochastic calculus ,General Decision Sciences ,Management Science and Operations Research ,Point process ,Computer Science::Performance ,Server ,FOS: Mathematics ,Ergodic theory ,Martingale (probability theory) ,Queue ,Random variable ,Algorithm ,Mathematics - Probability ,Mathematics - Abstract
The paper studies a multiserver retrial queueing system with $m$ servers. The arrival process is a point process with strictly stationary and ergodic increments. A customer arriving at the system occupies one of the free servers. If upon arrival all servers are busy, the customer joins the secondary queue (the orbit) and, after some random time, repeatedly retries to occupy a server. The service time of each customer is an exponentially distributed random variable with parameter $\mu_1$, and the time between retrials is exponentially distributed with parameter $\mu_2$ for each customer. Using a martingale approach, the paper provides an analysis of this system. The paper establishes the stability condition and studies the behavior of the limiting queue-length distributions as $\mu_2$ increases to infinity. As $\mu_2\to\infty$, the paper also proves the convergence of the appropriate queue-length distributions to those of the associated `usual' multiserver queueing system without retrials. An algorithm for the numerical solution of the equations associated with the limiting queue-length distribution of retrial systems is provided., Comment: To appear in "Annals of Operations Research" 141 (2006) 19-52. Replacement corrects a small number of misprints
- Published
- 2006
27. LSSPER: Solving the Resource-Constrained Project Scheduling Problem with Large Neighbourhood Search
- Author
-
Mireille Palpant, Philippe Michelon, and Christian Artigues
- Subjects
Scheme (programming language) ,Mathematical optimization ,Heuristic (computer science) ,business.industry ,General Decision Sciences ,Management Science and Operations Research ,Resolution (logic) ,Project scheduling problem ,Theory of computation ,Constraint programming ,Local search (optimization) ,business ,Heuristics ,Algorithm ,computer ,Mathematics ,computer.programming_language - Abstract
This paper presents the Local Search with SubProblem Exact Resolution (LSSPER) method, based on large neighbourhood search, for solving the resource-constrained project scheduling problem (RCPSP). At each step of the method, a subpart of the current solution is fixed while the other part defines a subproblem solved externally by a heuristic or an exact solution approach (using either constraint programming or mathematical programming techniques). Hence, the method can be seen as a hybrid scheme. The key point of the method is the choice of the subproblem to be optimized. In this paper, we investigate the application of the method to the RCPSP and propose several strategies for generating the subproblem. To evaluate these strategies, and to compare the whole method with current state-of-the-art heuristics, extensive numerical experiments have been performed. The proposed method appears to be very efficient.
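The fix-and-reoptimize scheme described in the abstract can be illustrated on a toy problem. The sketch below is an assumption-laden illustration of a generic large-neighbourhood-search loop on a small linear assignment instance, not the paper's RCPSP implementation:

```python
# Generic large-neighbourhood search in the spirit of LSSPER: at each
# step a subset of the current solution is freed ("destroyed") and the
# induced subproblem is solved exactly while the rest stays fixed.
# Toy problem: assign n jobs to n slots minimizing total cost.
import random
from itertools import permutations

def assignment_cost(assign, cost):
    return sum(cost[j][s] for j, s in enumerate(assign))

def lns_assignment(cost, subsize=3, iters=200, seed=0):
    rng = random.Random(seed)
    n = len(cost)
    assign = list(range(n))                  # job j -> slot assign[j]
    best = assignment_cost(assign, cost)
    for _ in range(iters):
        jobs = rng.sample(range(n), subsize)     # destroy: free a few jobs
        slots = [assign[j] for j in jobs]
        # Repair: solve the small subproblem exactly by enumeration.
        # The current sub-assignment is among the candidates, so the
        # repair step never worsens the solution.
        best_perm = min(permutations(slots),
                        key=lambda p: sum(cost[j][s] for j, s in zip(jobs, p)))
        for j, s in zip(jobs, best_perm):
            assign[j] = s
        best = min(best, assignment_cost(assign, cost))
    return assign, best

# With subsize == n the whole problem is solved exactly in one step.
cost = [[abs(j + s - 2) for s in range(3)] for j in range(3)]
assign, best = lns_assignment(cost, subsize=3, iters=5)
```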
- Published
- 2004
28. Amortized Random Backtracking
- Author
-
Olivier Lhomme
- Subjects
Amortized analysis ,Optimization problem ,Backtracking ,Search algorithm ,Theory of computation ,Local consistency ,Probabilistic logic ,General Decision Sciences ,Overhead (computing) ,Management Science and Operations Research ,Algorithm ,Mathematics - Abstract
Some nonsystematic search algorithms can deal with partial assignments of variables, and can therefore use constraint propagation techniques. We call them NSPA algorithms (Nonsystematic Search with Partial Assignments). For satisfiability or optimization problems, such NSPA algorithms scale much better than systematic algorithms. We show in this paper that naive NSPA algorithms pay a severe overhead due to the way they visit partial assignments. Amortizing the visits of partial assignments is an important feature which we introduce and analyze in this paper. We also propose a new NSPA algorithm that is amortized: it is called Amortized Random Backtracking, and performs a probabilistic exploration of the search space. It can be seen as an amortized version of iterative sampling and has given very good experimental results on a real-life timetabling problem.
- Published
- 2004
29. Improved algorithms for proportionate flow shop scheduling with due-window assignment
- Author
-
Jin Qian and Haiyan Han
- Subjects
Theory of computation ,General Decision Sciences ,Assignment methods ,Window (computing) ,Flow shop scheduling ,Management Science and Operations Research ,Binary logarithm ,Algorithm ,Mathematics - Abstract
In a recent study, Sun et al. (AOR 292:113–131, 2020) studied due-window proportionate flow shop scheduling problems with position-dependent weights. For the common due-window (denoted CONW) and slack due-window (denoted SLKW) assignment methods, they proved that these two problems can each be solved in $$O(n^2\log n)$$ time, where n is the number of jobs. In this paper, we consider the same problems; our contribution is that the CONW problem can be optimally solved by a lower-order algorithm running in $$O(n\log n)$$ time, an improvement by a factor of n.
- Published
- 2021
30. [Untitled]
- Author
-
Ya-xiang Yuan and Jiawang Nie
- Subjects
Reduction (complexity) ,Semidefinite programming ,Predictor–corrector method ,Quadratic equation ,Line search ,Conjugate gradient method ,General Decision Sciences ,Function (mathematics) ,Management Science and Operations Research ,Constant (mathematics) ,Algorithm ,Mathematics - Abstract
Recently, we have extended SDP by adding a quadratic term to the objective function and have given a potential reduction algorithm using NT directions. This paper presents a predictor–corrector algorithm using both Dikin-type and Newton centering steps and studies properties of the Dikin-type step. In this algorithm, when the condition number K(XS) is less than a given number K0, a Dikin-type step is used; otherwise, a Newton centering step is taken. In both cases, the step-length is determined by line search. We show that at least a constant reduction in the potential function is guaranteed. Moreover, the algorithm is proved to terminate in $O(\sqrt{n}\log(1/\varepsilon))$ steps. At the end of the paper, we discuss how to compute the search direction (ΔX,ΔS) using the conjugate gradient method.
- Published
- 2001
31. [Untitled]
- Author
-
Yves Dallery and H. Le Bihan
- Subjects
Production line ,Reliability theory ,Exponential distribution ,Computer science ,Theory of computation ,General Decision Sciences ,Decomposition method (queueing theory) ,Management Science and Operations Research ,Algorithm ,Exponential function - Abstract
We consider production lines consisting of a series of machines separated by finite buffers. The processing time of each machine is deterministic and all the machines have the same processing time. All machines are subject to failures. As is usually the case for production systems, we assume that the failures are operation dependent. Moreover, we assume that the time to failure and the time to repair are exponentially distributed. To analyze such systems, an efficient decomposition procedure has been proposed by Gershwin et al. In general, this method provides fairly accurate results. There are, however, cases for which the accuracy of this decomposition method may not be so good, namely when the reliability parameters (mean times to failure and mean times to repair) of the different machines have different orders of magnitude. Such a situation may be encountered in real production lines. The purpose of this paper is to propose an improvement of Gershwin's original decomposition method that provides accurate results even in the above-mentioned situation. The basic difference between the decomposition method presented in this paper and that of Gershwin is that the times to repair of the equivalent machines are modeled as generalized exponential distributions instead of exponential distributions. This allows us to use a two-moment approximation instead of a one-moment approximation of the repair time distributions of these equivalent machines. The new method is presented in the context of the continuous flow model; however, it is readily applicable to the synchronous model.
- Published
- 2000
32. [Untitled]
- Author
-
Marcus Randall and David Abramson
- Subjects
Mathematical optimization ,Quadratic equation ,Bin packing problem ,Graph colouring ,Simulated annealing ,Theory of computation ,Code (cryptography) ,General Decision Sciences ,Management Science and Operations Research ,Algorithm ,Travelling salesman problem ,Mathematics ,Integer (computer science) - Abstract
This paper explores the use of simulated annealing (SA) for solving arbitrary combinatorial optimisation problems. It reviews an existing code called GPSIMAN for solving 0-1 problems, and evaluates it against a commercial branch-and-bound code, OSL. The problems tested include travelling salesman, graph colouring, bin packing, quadratic assignment and generalised assignment. The paper then describes a technique for representing these problems using arbitrary integer variables, and shows how a general simulated annealing algorithm can also be applied. This new code, INTSA, outperforms GPSIMAN and OSL on almost all of the problems tested.
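A minimal general simulated annealing loop of the kind discussed in this abstract might look as follows. This is a generic sketch on a hypothetical toy 0-1 problem; it does not reproduce GPSIMAN or INTSA:

```python
# Generic simulated annealing: accept improving moves always, worsening
# moves with probability exp(-delta / T), while the temperature cools.
import math, random

def simulated_annealing(energy, x0, neighbor, t0=1.0, cooling=0.995,
                        steps=5000, seed=1):
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        de = energy(y) - e
        if de <= 0 or rng.random() < math.exp(-de / t):
            x, e = y, e + de
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling
    return best_x, best_e

# Toy 0-1 problem: pick items whose weights sum as close to a target
# as possible (hypothetical data).
weights = [3, 7, 1, 14, 5]
target = 12
def energy(x):
    return abs(sum(w for w, b in zip(weights, x) if b) - target)
def flip_one(x, rng):
    i = rng.randrange(len(x))
    return x[:i] + (1 - x[i],) + x[i + 1:]

sol, val = simulated_annealing(energy, (0,) * len(weights), flip_one)
```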
- Published
- 1999
33. [Untitled]
- Author
-
Lea Friedman, Israel David, and Zilla Sinuany-Stern
- Subjects
Mathematical optimization ,Markov chain ,Heuristic ,Theory of computation ,General Decision Sciences ,Probability distribution ,Observable ,Observability ,Management Science and Operations Research ,Algorithm ,Realization (systems) ,Action (physics) ,Mathematics - Abstract
We suggest a heuristic solution procedure for Partially Observable Markov Decision Processes with finite action space and finite state space over an infinite horizon. The algorithm is a fast, very simple general heuristic; it is applicable to multiple states (not necessarily ordered), multiple actions and various distribution functions. The quality of the algorithm is checked in this paper against existing analytical and empirical results for two specific models of machine replacement. One model refers to the case of two actions and two system states with uniform observations (Grosfeld-Nir [4]), and the other refers to a case of many ordered states with binomial observations (Sinuany-Stern et al. [11]). The paper also presents the model realization for various probability distribution functions applied to maintenance and quality control.
- Published
- 1999
34. The Practice and Theory of Automated Timetabling (2012)
- Author
-
Dag Kjenstad, Ender Özcan, Barry McCollum, Edmund K. Burke, and Atle Riise
- Subjects
Annals ,Political science ,General Decision Sciences ,Library science ,Review process ,Management Science and Operations Research ,Northern ireland ,Algorithm - Abstract
This special volume is focused upon a selection of the papers that were presented at the 9th International Conference on the Practice and Theory of Automated Timetabling (PATAT) in Son, Norway, between 28th and 31st August, 2012. The PATAT conferences are held biennially. This is the third occasion that we have employed the Annals of Operations Research as the forum for a revised and selected collection of papers. The first and second PATAT special volumes of this journal (Volumes 194 and 218) represented the 7th and 8th conferences, held in Montreal in 2008 and in Belfast in 2010, respectively. The PATAT series has been running since 1995. It has been held in Scotland, Canada (twice), Germany, Belgium, USA, Czech Republic, Northern Ireland, Norway, and England. The series acts as a bridge between disciplines and as a bridge between science and practice. Its focus is on all aspects of timetabling research and practice across personnel rostering, educational timetabling, sports scheduling and transportation timetabling. The conference in Son had over 90 delegates from all over the world. We had 4 plenary presentations, 49 standard talks, and 11 practitioner presentations. All the presenters were invited to submit revised papers to this special volume. Additionally, a public announcement was circulated
- Published
- 2015
35. [Untitled]
- Author
-
Bernard T. Han and Jack Cook
- Subjects
Mathematical optimization ,Random search ,Heuristic (computer science) ,Bin packing problem ,Simulated annealing ,General Decision Sciences ,Column generation ,Management Science and Operations Research ,Greedy algorithm ,Evaluation function ,Algorithm ,Time complexity ,Mathematics - Abstract
In this paper, a mathematical model and a solution algorithm are developed for solving a robot acquisition and cell formation problem (RACFP). Our model considers purchasing a proper mix of robots and assigning all given workstations to purchased robots such that each robot cell satisfies its workstations' resource demands while minimizing the total system (acquisition) cost. Specifically, each robot has two capacity constraints - available work envelope and effective machine time. RACFP is formulated as a multi-type two-dimensional bin packing problem, a pure 0-1 integer program which is known to be NP-hard. In this paper, a very efficient (polynomial time bound) heuristic algorithm is developed and implemented. The algorithm consists of two major stages. The first stage employs an LP-based bounding procedure to produce a tight solution bound, whereas the second stage repetitively invokes a random search heuristic using a greedy evaluation function. The algorithm is tested by solving 450 randomly generated problems based on realistic parameter values. Computational results show that the heuristic algorithm has outperformed algorithms using general optimization techniques such as Simulated Annealing and Column Generation. All test problems are solved within an order of magnitude of 10 seconds, with a gap of less than 1% from the optimum. More importantly, over 70% of all solutions are optimal (334 out of 450). The algorithm can be easily modified for other applications such as file placement for a multi-device storage system and job scheduling for a multi-processing system.
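The two-capacity bin-packing view of the problem can be illustrated with a simple first-fit-decreasing sketch. The data and function names below are hypothetical, and the paper's LP-based bounding and random-search stages are omitted:

```python
# Hypothetical sketch of a greedy first-fit-decreasing heuristic for a
# two-capacity bin-packing view of the robot-cell problem: each
# workstation demands (envelope, machine_time); each opened robot ("bin")
# has fixed capacities on both resources.
def first_fit_cells(demands, env_cap, time_cap):
    """Assign each (envelope, time) demand to the first robot cell that
    fits, opening a new cell when none does. Returns list of cells as
    [used_env, used_time, station_indices]."""
    cells = []
    order = sorted(range(len(demands)),              # decreasing total demand
                   key=lambda i: -(demands[i][0] + demands[i][1]))
    for i in order:
        e, t = demands[i]
        for cell in cells:
            if cell[0] + e <= env_cap and cell[1] + t <= time_cap:
                cell[0] += e; cell[1] += t; cell[2].append(i)
                break
        else:
            cells.append([e, t, [i]])
    return cells

demands = [(4, 2), (3, 5), (2, 2), (5, 1), (1, 4)]
cells = first_fit_cells(demands, env_cap=8, time_cap=6)  # opens 3 cells
```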
- Published
- 1998
36. [Untitled]
- Author
-
Robert Pavur
- Subjects
Mathematical optimization ,Multiple discriminant analysis ,Function space ,General Decision Sciences ,Management Science and Operations Research ,Linear discriminant analysis ,Linear-fractional programming ,ComputingMethodologies_PATTERNRECOGNITION ,Discriminant function analysis ,Discriminant ,Optimal discriminant analysis ,Kernel Fisher discriminant analysis ,Algorithm ,Mathematics - Abstract
This paper proposes a new mathematical programming approach to represent the dimensions of the discriminant space for the multiple-group classification problem. Few papers have investigated generalizations of two-group mathematical programming approaches for the classification of multiple groups. While several papers have proposed mathematical programming models for separating groups of observations, the issue of considering the classification problem by finding discriminant linear functions to describe the groups in fewer dimensions has not been addressed. The new mathematical programming approach proposed in this paper first solves the multiple-group problem using a single discriminant function, which essentially represents the separation of the groups in one dimension. Then the multiple-group problem is successively solved using single discriminant functions with the requirement that successive linear discriminant functions have a sample covariance equal to zero. An algorithm is proposed to classify observations from multiple groups using the linear discriminant functions from the mathematical programming approach in a reduced number of dimensions.
- Published
- 1997
37. An algorithm for the construction of convex hulls in simple integer recourse programming
- Author
-
Leen Stougie, M.H. van der Vlerk, W.K. Klein Haneveld, Faculty of Economics and Business, Research programme OPERA, Econometrics and Operations Research, and ASE RI (FEB)
- Subjects
Convex analysis ,Convex hull ,Mathematical optimization ,Convex set ,Proper convex function ,General Decision Sciences ,Management Science and Operations Research ,Convex polytope ,Convex combination ,Output-sensitive algorithm ,SDG 7 - Affordable and Clean Energy ,Convex conjugate ,Algorithm ,Mathematics - Abstract
We consider the objective function of a simple integer recourse problem with fixed technology matrix and discretely distributed right-hand sides. Exploiting the special structure of this problem, we devise an algorithm that determines the convex hull of this function efficiently. The results are improvements over those in a previous paper. In the first place, the convex hull of many objective functions in the class is covered, instead of only one-dimensional versions. In the second place, the algorithm is faster than the one in the previous paper. Moreover, some new results on the structure of the objective function are presented.
- Published
- 1996
38. A tabu search algorithm to solve a green logistics bi-objective bi-level problem
- Author
-
José Luis González-Velarde, Lilian López-Vera, Alice E. Smith, and José-Fernando Camacho-Vallejo
- Subjects
Profit (accounting) ,Computer science ,Heuristic (computer science) ,Profit maximization ,Supply chain ,Pareto principle ,General Decision Sciences ,Green logistics ,Maximization ,Management Science and Operations Research ,Algorithm ,Tabu search - Abstract
This paper addresses a supply chain situation in which a company distributes commodities over a selected subset of customers while a manufacturer produces the commodities demanded by the customers. The distributor company has two objectives: maximization of the profit gained by the distribution process and minimization of $${\textit{CO}}_2$$ emissions; the latter is important due to the regulations imposed by the government. A compromise between both objectives exists, since profit maximization alone will attempt to include as many customers as possible, but longer routes will then be needed, causing more $${\textit{CO}}_2$$ emissions. The manufacturer aims to minimize its manufacturing and shipping costs. Since a predefined hierarchy between both companies exists in the supply chain, a bi-level programming approach is employed. The problem is modelled as a bi-level programming problem with two objectives in the upper level and a single objective in the lower level; the upper level is associated with the distributor, while the lower level is associated with the manufacturer. Due to the inherent complexity of solving this problem optimally, a heuristic scheme is proposed. A nested bi-objective tabu search algorithm is designed to obtain non-dominated bi-level feasible solutions with respect to the upper level. Considering both objectives of the distributor simultaneously allows us to focus on the minimization of $${\textit{CO}}_2$$ emissions caused by the supply chain while bearing in mind the distributor's profit. Numerical experimentation shows that the Pareto frontiers obtained by the proposed algorithm provide good alternatives for the decision-making process, and some managerial insights are given.
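For readers unfamiliar with the underlying metaheuristic, a minimal single-objective tabu search skeleton is sketched below. This is a generic illustration on a hypothetical toy problem, not the paper's nested bi-objective algorithm:

```python
# Minimal tabu search: take the best non-tabu neighbor each step;
# recently used moves are forbidden for `tenure` iterations, with an
# aspiration criterion that overrides the tabu status of a move
# leading to a new overall best.
from collections import deque

def tabu_search(cost, x0, neighbors, tenure=5, iters=100):
    x = x0
    best_x, best_c = x, cost(x)
    tabu = deque(maxlen=tenure)
    for _ in range(iters):
        candidates = [(cost(y), move, y) for move, y in neighbors(x)
                      if move not in tabu or cost(y) < best_c]  # aspiration
        if not candidates:
            break
        c, move, x = min(candidates)
        tabu.append(move)
        if c < best_c:
            best_x, best_c = x, c
    return best_x, best_c

# Toy example: walk the integers toward the minimizer of (x - 7)^2.
best_x, best_c = tabu_search(lambda x: (x - 7) ** 2, 0,
                             lambda x: [(+1, x + 1), (-1, x - 1)])
```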
- Published
- 2021
39. Solving the shift and break design problem using integer linear programming
- Author
-
Marc Uetz, Arjan Akkermans, Gerhard F. Post, and Mathematics of Operations Research
- Subjects
021103 operations research ,Current (mathematics) ,Computer science ,Heuristic ,0211 other engineering and technologies ,Phase (waves) ,UT-Hybrid-D ,General Decision Sciences ,02 engineering and technology ,Management Science and Operations Research ,Set (abstract data type) ,Break scheduling ,Shift design ,Theory of computation ,0202 electrical engineering, electronic engineering, information engineering ,Benchmark (computing) ,020201 artificial intelligence & image processing ,Integer programming ,Algorithm ,Timetabling - Abstract
In this paper we propose a two-phase approach to solve the shift and break design problem using integer linear programming. In the first phase we create the shifts, while heuristically taking the breaks into account. In the second phase we assign breaks to each occurrence of every shift, one by one, repeating this until no improvement is found. On a set of benchmark instances, composed of both randomly generated and real-life ones, this approach obtains better results than the current best known method for the shift and break design problem.
- Published
- 2021
40. Bursty traffic modeling and efficient analysis algorithms via fluid-flow models for ATM IBCN
- Author
-
Nikolas Mitrou and Kimon Kontovasilis
- Subjects
Computation ,Stability (learning theory) ,General Decision Sciences ,Management Science and Operations Research ,Reduction (complexity) ,Nonlinear system ,symbols.namesake ,Theory of computation ,Convergence (routing) ,symbols ,Algorithm ,Newton's method ,Eigenvalues and eigenvectors ,Mathematics - Abstract
In this paper fluid models for heterogeneous multiplexed traffic are considered. First, some extensions to the general theory applicable to superposed, time-reversible Markovian Rate Processes are given. These refer to the connection between performance metrics, the treatment of singular systems and the continuity of the solution with respect to the system parameters. The general framework is then carried over to the heterogeneous multiplexing of ON/OFF sources. By combining the general theory with the special structure of the ON/OFF sources, several important facets of this structure are highlighted. As a result, more powerful methods that improve computation speed, stability and ease of implementation are produced. More specifically, the numerical part of the method is reduced to the solution of one nonlinear equation per system eigenvalue. The solution is obtainable through a variant of the (locally quadratically convergent) Newton method, for which easily computable starting values that guarantee convergence are given. In addition, explicit expressions for the eigenvectors are provided with the potentially unstable quantities factored out. The paper also provides explicit and stably computable formulae for upper bounds on the coefficients of the spectral components present in the expressions for the performance measures of interest. Moreover, the paper proves a partial ordering property for the system eigenvalues and presents an algorithm that performs full ordering on-line; in many cases this results in a great reduction in the amount of computation, without any significant loss of precision. Lastly, the particular case of heterogeneity in which the sources differ only in their rates within bursts is seen to have features resembling homogeneous systems, and the possibility of substituting an "equivalent" homogeneous system of reduced order for the original heterogeneous one is addressed.
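The per-eigenvalue equations are solved by a Newton variant. As a generic illustration only (the paper supplies problem-specific starting values that guarantee convergence; the equation below is hypothetical), a scalar Newton iteration looks like:

```python
# Scalar Newton iteration: x_{k+1} = x_k - f(x_k) / f'(x_k),
# locally quadratically convergent near a simple root.
def newton(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("no convergence")

# Example equation: the positive root of x^2 - 2 = 0.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.5)
```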
- Published
- 1994
41. Uniform random number generation
- Author
-
Pierre L'Ecuyer
- Subjects
Matrix (mathematics) ,Computer science ,Random number generation ,Robustness (computer science) ,Theory of computation ,General Decision Sciences ,Management Science and Operations Research ,TestU01 ,Algorithm ,Randomness ,Statistical hypothesis testing ,Shift register - Abstract
In typical stochastic simulations, randomness is produced by generating a sequence of independent uniform variates (usually real-valued between 0 and 1, or integer-valued in some interval) and transforming them in an appropriate way. In this paper, we examine practical ways of generating (deterministic approximations to) such uniform variates on a computer. We compare them in terms of ease of implementation, efficiency, theoretical support, and statistical robustness. We look in particular at several classes of generators, such as linear congruential, multiple recursive, digital multistep, Tausworthe, lagged-Fibonacci, generalized feedback shift register, matrix, linear congruential over fields of formal series, and combined generators, and show how all of them can be analyzed in terms of their lattice structure. We also mention other classes of generators, like non-linear generators, discuss other kinds of theoretical and empirical statistical tests, and give a bibliographic survey of recent papers on the subject.
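As a small example of one of the generator classes surveyed here, a linear congruential generator with the classic "minimal standard" parameters of Park and Miller can be written as:

```python
# Linear congruential generator: x_{n+1} = (a * x_n + c) mod m.
# Parameters a = 16807, c = 0, m = 2^31 - 1 are the Park-Miller
# "minimal standard"; the yielded values are uniform variates in (0, 1).
def lcg(seed, a=16807, c=0, m=2 ** 31 - 1):
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

gen = lcg(seed=12345)
u = [next(gen) for _ in range(3)]
```

Note that such a generator is deterministic: the same seed always reproduces the same stream, which is exactly what makes its lattice structure amenable to the theoretical analysis the paper describes.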
- Published
- 1994
42. Computation of mean-semivariance efficient sets by the Critical Line Algorithm
- Author
-
Harry M. Markowitz, Yuji Yamane, Ganlin Xu, and Peter Todd
- Subjects
Mathematical optimization ,Optimization problem ,Critical line ,Computation ,Semivariance ,Theory of computation ,General Decision Sciences ,Efficient frontier ,Portfolio optimization problem ,Management Science and Operations Research ,Algorithm ,Mathematics ,Parametric statistics - Abstract
The general mean-semivariance portfolio optimization problem seeks to determine the efficient frontier by solving a parametric non-quadratic programming problem. In this paper it is shown how to transform this problem into a general mean-variance optimization problem, hence the Critical Line Algorithm is applicable. This paper also discusses how to implement the critical line algorithm to save storage and reduce execution time.
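The semivariance objective underlying the problem can be illustrated directly. The sketch below is a plain downside-risk computation on hypothetical data, not the paper's transformation to a mean-variance problem:

```python
# Semivariance of a return history relative to a benchmark b: the mean
# of squared shortfalls max(b - r, 0)^2, so only downside deviations
# below the benchmark are penalized (unlike the ordinary variance).
def semivariance(returns, benchmark=0.0):
    shortfalls = [max(benchmark - r, 0.0) for r in returns]
    return sum(s * s for s in shortfalls) / len(returns)

rets = [0.05, -0.02, 0.01, -0.04, 0.03]
sv = semivariance(rets)   # only -0.02 and -0.04 contribute
```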
- Published
- 1993
43. Algorithms for large scale set covering problems
- Author
-
Nicos Christofides and José M. P. Paixão
- Subjects
Mathematical optimization ,Linear programming ,Heuristic (computer science) ,General Decision Sciences ,Covering problems ,Set cover problem ,Management Science and Operations Research ,symbols.namesake ,Lagrangian relaxation ,symbols ,State space ,Relaxation (approximation) ,Row ,Algorithm ,Mathematics - Abstract
This paper is concerned with the set covering problem (SCP), and in particular with the development of a new algorithm capable of solving large-scale SCPs of the size found in real-life situations. The set covering problem has a wide variety of practical applications which, in general, yield large and sparse instances, normally with hundreds of rows and thousands of columns. In this paper, we present an algorithm capable of solving problems of this size and test problems up to 400 rows and 4000 columns are solved. The method developed in this paper consists of a tree-search procedure based on a combination of decomposition and state space relaxation which is a technique developed for obtaining lower bounds on the dynamic program associated with a combinatorial optimization problem. The large size SCPs are decomposed into many smaller SCPs which are then solved or bounded by state space relaxation (SSR). Before using the decomposition and SSR, reductions both in the number of columns and the number of rows of the problem are made by applying Lagrangian relaxation, linear programming and heuristic methods.
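As background, the classical greedy heuristic for set covering, among the simplest of the heuristic methods such algorithms use alongside LP and Lagrangian bounds, can be sketched as follows (hypothetical instance; this is not the paper's tree-search algorithm):

```python
# Classical greedy set covering: repeatedly pick the column covering
# the most still-uncovered rows per unit cost. Assumes the instance is
# feasible (every row is covered by some column).
def greedy_set_cover(rows, columns, costs):
    """rows: set of row indices; columns[j]: set of rows column j covers."""
    uncovered = set(rows)
    chosen = []
    while uncovered:
        j = min((j for j in range(len(columns)) if columns[j] & uncovered),
                key=lambda j: costs[j] / len(columns[j] & uncovered))
        chosen.append(j)
        uncovered -= columns[j]
    return chosen

rows = {1, 2, 3, 4, 5}
columns = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}, {5}]
costs = [1.0, 1.0, 1.0, 1.0, 0.5]
pick = greedy_set_cover(rows, columns, costs)
```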
- Published
- 1993
44. The practice and theory of automated timetabling
- Author
-
Barry McCollum and Edmund K. Burke
- Subjects
Engineering ,business.industry ,General Decision Sciences ,Library science ,Review process ,Management Science and Operations Research ,business ,Algorithm - Abstract
This special volume comprises revised versions of a selection of the papers that were presented at the 9th International Conference on the Practice and Theory of Automated Timetabling (PATAT) in Belfast between 10th and 13th August 2010. The PATAT conferences are held biennially and this is the second time that the Annals of Operations Research has provided the venue for such a special collection of papers. The first PATAT special volume of this journal (Volume 194) contains papers associated with the 8th conference which was held in Montreal in 2008. PATAT acts as an international forum for all aspects of timetabling, including educational timetabling, personnel rostering, sports timetabling, and transport scheduling. The conference series is particularly concerned both with closing the gap between timetabling theory and practice and with supporting multidisciplinary interactions. The collection of papers in this special volume reflect these aims. The conference in Belfast brought together approximately 100 participants from around the world. There were five plenary presentations, 74 standard talks, and 16 practitioner presentations. All the delegates were invited to submit their revised papers to this special volume. The papers have been through a rigorous and thorough review process, and we are delighted to be able to present the community with such an interesting and diverse selection of articles that reflect the latest thinking in timetabling research. We would like to take this opportunity to thank all those who were responsible for the success of the conference. We would particularly like to thank Brian Fleming and all those within the School of Electronics, Electrical Engineering and Computer Science at the Queen's University of Belfast who worked so hard before and during the conference.
- Published
- 2014
45. Monte Carlo (importance) sampling within a benders decomposition algorithm for stochastic linear programs
- Author
-
Gerd Infanger
- Subjects
Mathematical optimization ,Linear programming ,Stochastic process ,Numerical analysis ,Theory of computation ,Monte Carlo method ,General Decision Sciences ,Sampling (statistics) ,Management Science and Operations Research ,Project portfolio management ,Algorithm ,Importance sampling ,Mathematics - Abstract
This paper focuses on Benders decomposition techniques and Monte Carlo sampling (importance sampling) for solving two-stage stochastic linear programs with recourse, a method first introduced by Dantzig and Glynn [7]. The algorithm is discussed and further developed. The paper gives a complete presentation of the method as it is currently implemented. Numerical results from test problems of different areas are presented. Using small test problems, we compare the solutions obtained by the algorithm with universe solutions. We present the solutions of large-scale problems with numerous stochastic parameters, which in the deterministic formulation would have billions of constraints. The problems concern expansion planning of electric utilities with uncertainty in the availabilities of generators and transmission lines and portfolio management with uncertainty in the future returns.
- Published
- 1992
46. An algorithm for the mixed-integer nonlinear bilevel programming problem
- Author
-
Thomas A. Edmunds and Jonathan F. Bard
- Subjects
Set (abstract data type) ,Mathematical optimization ,Nonlinear system ,Quadratic equation ,Branch and bound ,Theory of computation ,General Decision Sciences ,Management Science and Operations Research ,Algorithm ,Bilevel optimization ,Nonlinear programming ,Integer (computer science) ,Mathematics - Abstract
The bilevel programming problem (BLPP) is a two-person nonzero sum game in which play is sequential and cooperation is not permitted. In this paper, we examine a class of BLPPs where the leader controls a set of continuous and discrete variables and tries to minimize a convex nonlinear objective function. The follower's objective function is a convex quadratic in a continuous decision space. All constraints are assumed to be linear. A branch and bound algorithm is developed that finds global optima. The main purpose of this paper is to identify efficient branching rules, and to determine the computational burden of the numeric procedures. Extensive test results are reported. We close by showing that it is not readily possible to extend the algorithm to the more general case involving integer follower variables.
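The sequential leader-follower structure described above can be shown on a tiny instance. The sketch below uses brute-force enumeration of the leader's integer decisions rather than the paper's branch-and-bound procedure, and the one-dimensional objectives are made up for illustration: the follower's convex quadratic has a closed-form best response, which the leader anticipates.

```python
# Minimal sketch of the bilevel (leader-follower) structure on a
# hypothetical 1-D instance, using enumeration instead of branch and bound.
def follower_best_response(y):
    # follower: min_x (x - y)^2 + x  subject to x >= 0
    # unconstrained minimizer x = y - 0.5, projected onto x >= 0
    return max(y - 0.5, 0.0)

def leader_cost(y, x):
    # convex leader objective, evaluated at the follower's rational reaction
    return (y - 2) ** 2 + 3 * x

candidates = range(0, 5)  # discrete leader decisions
best_y = min(candidates, key=lambda y: leader_cost(y, follower_best_response(y)))
print(best_y)  # the leader's best anticipatory choice
```

Note how the leader's optimum need not coincide with the minimizer of its own objective in isolation (y = 2 here), because the follower's reaction feeds back into the leader's cost; it is this coupling that branching rules in a bilevel branch-and-bound scheme must handle.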
- Published
- 1992
47. A clustering approach to the planar hub location problem
- Author
-
Morton E. O'Kelly
- Subjects
Set (abstract data type) ,Mathematical optimization ,Flow (mathematics) ,Theory of computation ,Cluster (physics) ,General Decision Sciences ,Centroid ,Management Science and Operations Research ,System of linear equations ,Supercomputer ,Cluster analysis ,Algorithm ,Mathematics - Abstract
The problem tackled in this paper is as follows: consider a set of n interacting points in a two-dimensional space. The levels of interactions between the observations are given exogenously. It is required to cluster the n observations into p groups, so that the sum of squared deviations from the cluster means is as small as possible. Further, assume that the cluster means are adjusted to reflect the interaction between the entities. (It is this latter consideration which makes the problem interesting.) A useful property of the problem is that the use of a squared distance term yields a linear system of equations for the coordinates of the cluster centroids. These equations are derived and solved repeatedly for a given set of cluster allocations. A sequential reallocation of the observations between the clusters is then performed. One possible application of this problem is to the planar hub location problem, where the interacting observations are a system of cities and the interaction effects represent the levels of flow or movement between the entities. The planar hub location problem has been limited so far to problems with fewer than 100 nodes. The use of the squared distance formulation and a powerful supercomputer (Cray Y-MP) has enabled quick solution of large systems with 250 points and four groups. The paper includes both small illustrative examples and computational results using systems with up to 500 observations and 9 clusters.
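The key observation, that squared distances make the centroid coordinates the solution of a linear system, can be sketched directly. The interaction model below (pairwise weights pulling group centroids toward each other, so setting the gradient to zero couples the p centroids) is an illustrative assumption, not necessarily the paper's exact formulation; the allocation step that would alternate with this solve is omitted.

```python
import numpy as np

# For a fixed allocation of n points into p groups, minimize
#   sum_i ||x_i - c_{g(i)}||^2  +  sum_{g<h} W[g,h] ||c_g - c_h||^2
# over the centroids c_1..c_p. The stationarity conditions are linear:
#   (n_g + sum_h W[g,h]) c_g - sum_h W[g,h] c_h = sum_{i in g} x_i
rng = np.random.default_rng(1)
X = rng.normal(size=(12, 2))            # n = 12 points in the plane
labels = np.repeat([0, 1, 2], 4)        # fixed allocation into p = 3 groups
W = np.array([[0.0, 2.0, 0.5],          # symmetric between-group interaction
              [2.0, 0.0, 1.0],
              [0.5, 1.0, 0.0]])

p = 3
A = np.zeros((p, p))
B = np.zeros((p, 2))
for g in range(p):
    members = X[labels == g]
    A[g, g] = len(members) + W[g].sum()  # diagonal: group size plus couplings
    A[g] -= W[g]                         # off-diagonal: -W[g, h] for h != g
    B[g] = members.sum(axis=0)

centroids = np.linalg.solve(A, B)        # one solve covers both coordinates
print(centroids)
```

With W = 0 the system decouples and the centroids reduce to ordinary group means; nonzero interactions shift each centroid toward the groups it exchanges flow with, which is the effect the abstract highlights.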
- Published
- 1992
48. A novel variable neighborhood strategy adaptive search for SALBP-2 problem with a limit on the number of machine’s types
- Author
-
Rapeepan Pitakaso, Kanchana Sethanan, Paulina Golinska-Dawson, and Ganokgarn Jirasirilerd
- Subjects
Set (abstract data type) ,Variable (computer science) ,Operations research ,Computer science ,Theory of computation ,Other engineering and technologies ,Neighbourhood (graph theory) ,General Decision Sciences ,Beam search ,Engineering and technology ,Limit (mathematics) ,Management Science and Operations Research ,Algorithm - Abstract
This paper presents variable neighborhood strategy adaptive search (VaNSAS), a novel method for solving a special case of the assembly line balancing problem type 2 (SALBP-2S) that considers a limitation on multi-skilled workers. The objective is to minimize the cycle time while respecting a limit on the number of machine types in a particular workstation. VaNSAS is composed of two steps: (1) generating a set of tracks and (2) performing the track touring process (TTP). During TTP, the tracks select and use a black box with a neighborhood strategy in order to improve the solution obtained in step (1). Three modified neighborhood strategies are designed to be used as the black boxes: (1) a modified differential evolution algorithm (MDE), (2) large neighborhood search (LNS) and (3) shortest processing time-swap (SPT-SWAP). The proposed method has been tested on two datasets: (1) 128 standard test instances of SALBP-2 and (2) 21 random datasets of SALBP-2S. The computational results on the first dataset show that VaNSAS outperforms the best known method, iterative beam search (IBS), and all other standard methods. VaNSAS finds the optimal solution for 98.4% of the test instances, while IBS finds it for 95.3%; MDE, LNS and SPT-SWAP find optimal solutions for 85.9%, 83.6% and 82.8% of the instances, respectively. On the second group of test instances, VaNSAS finds the minimum solution among all methods in 100% of cases, while MDE, LNS and SPT-SWAP do so in 76.19%, 61.90% and 52.38% of cases.
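The adaptive selection among black-box neighborhood strategies can be sketched with a toy local search. The objective, the three stand-in moves, and the score-based roulette wheel below are hypothetical simplifications of the track touring process, not the paper's MDE / LNS / SPT-SWAP operators.

```python
import random

# Minimal sketch of the "track" idea: a search trajectory repeatedly picks one
# of several black-box neighborhood moves, with selection probabilities
# adapted from each move's past success.
random.seed(42)

def objective(x):
    return abs(x - 37)  # pretend "cycle time" to minimize

moves = {
    "perturb_small": lambda x: x + random.choice([-1, 1]),
    "perturb_big":   lambda x: x + random.choice([-5, 5]),
    "restart":       lambda x: random.randint(0, 100),
}
scores = {name: 1.0 for name in moves}  # uniform scores to start

x, best = 0, objective(0)
for _ in range(200):
    # roulette-wheel selection over black boxes, weighted by score
    names = list(moves)
    name = random.choices(names, weights=[scores[n] for n in names])[0]
    cand = moves[name](x)
    if objective(cand) < objective(x):
        x = cand
        scores[name] += 1.0  # reward the black box that improved the track
    best = min(best, objective(x))

print(x, best)
```

Running several such tracks in parallel and letting them share the score table would mirror step (1) of the method: the population of tracks collectively learns which neighborhood strategy is currently paying off.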
- Published
- 2021
49. A Lagrangian relaxation algorithm for optimizing a bi-objective agro-supply chain model considering CO2 emissions
- Author
-
Seyed Hamid Reza Pasandideh and Fatemeh Keshavarz-Ghorbani
- Subjects
Operations research ,Linear programming ,Total cost ,Computer science ,Other engineering and technologies ,General Decision Sciences ,Robust optimization ,Context (language use) ,Engineering and technology ,Management Science and Operations Research ,Purchasing ,Lagrangian relaxation ,Perishability ,Algorithm - Abstract
In this research, an agro-supply chain is investigated in the context of both economic and environmental issues. To this end, a bi-objective model is formulated as a mixed-integer linear program that aims to minimize the total costs and CO2 emissions. It integrates purchasing, transporting, and storing decisions, considering specific characteristics of agro-products such as seasonality, perishability, and uncertainty. This study provides a different set of temperature conditions for preserving products from spoilage. In addition, a robust optimization approach is used to tackle the uncertainty. The $$\varepsilon$$-constraint method is then used to convert the bi-objective model into a single-objective one. To solve the problem, the Lagrangian relaxation algorithm is applied as an efficient approach that provides lower bounds for the original problem and is used to estimate upper bounds. Finally, a real case study is presented to give insight by assessing the impact of uncertainty on system costs.
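The epsilon-constraint step can be sketched in a few lines: the CO2 objective is moved into a constraint (emissions ≤ ε) and the cost objective is minimized for a sweep of ε values, tracing out the Pareto front. The candidate "plans" and their (cost, emissions) figures below are made up for illustration; in the paper each evaluation would be a mixed-integer program rather than a lookup.

```python
# Sketch of the epsilon-constraint conversion of a bi-objective model.
plans = {            # hypothetical purchase/transport/storage plans
    "A": (100.0, 9.0),   # (total cost, CO2 emissions)
    "B": (120.0, 6.0),
    "C": (150.0, 4.0),
    "D": (200.0, 3.0),
}

def eps_constraint(eps):
    # single-objective problem: minimize cost subject to emissions <= eps
    feasible = {name: cost for name, (cost, co2) in plans.items() if co2 <= eps}
    return min(feasible, key=feasible.get) if feasible else None

# sweeping eps pairs each emissions budget with its cheapest feasible plan
front = [(eps, eps_constraint(eps)) for eps in (3.0, 4.0, 6.0, 9.0)]
print(front)
```

Tightening ε forces progressively more expensive, lower-emission plans, which is exactly the cost/CO2 trade-off the bi-objective model is meant to expose.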
- Published
- 2021
50. The CoMirror algorithm with random constraint sampling for convex semi-infinite programming
- Author
-
Sixiang Zhao, William B. Haskell, and Bo Wei
- Subjects
Constraint (information theory) ,Rate of convergence ,Convex optimization ,Theory of computation ,General Decision Sciences ,Function (mathematics) ,Management Science and Operations Research ,Convex function ,Subgradient method ,Algorithm ,Semi-infinite programming ,Mathematics - Abstract
The CoMirror algorithm, by Beck et al. (Oper Res Lett 38(6):493–498, 2010), is designed to solve convex optimization problems with one functional constraint. At each iteration, it performs a mirror-descent update using either the subgradient of the objective function or the subgradient of the constraint function, depending on whether or not the constraint violation is below some tolerance. In this paper, we combine the CoMirror algorithm with inexact cut generation to create the SIP-CoM algorithm for solving semi-infinite programming (SIP) problems. First, we provide general error bounds for SIP-CoM. Then, we propose two specific random constraint sampling schemes to approximately solve the cut generation problem for generic SIP. When the objective and constraint functions are generally convex, randomized SIP-CoM achieves an $${\mathcal {O}}(1/\sqrt{N})$$ convergence rate in expectation (in terms of the optimality gap and SIP constraint violation). When the objective and constraint functions are all strongly convex, this rate can be improved to $${\mathcal {O}}(1/N)$$.
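The switching rule at the heart of the update can be sketched with the Euclidean mirror map (where mirror descent reduces to projected subgradient descent). The toy instance below, min x² subject to 1 − x ≤ 0 with optimum x = 1, and the step-size schedule are illustrative assumptions, not the paper's setting.

```python
# Sketch of the CoMirror switching rule with the Euclidean mirror map:
# step along the objective's subgradient while the constraint violation is
# within tolerance, otherwise step against the constraint's subgradient.
def comirror(x0, steps=4000, tol=1e-2):
    x, best = x0, None
    for t in range(1, steps + 1):
        eta = 0.5 / t ** 0.5              # diminishing step size
        if 1.0 - x <= tol:                # violation of g(x) = 1 - x within tol
            if best is None or x * x < best * best:
                best = x                  # keep the best near-feasible iterate
            x -= eta * 2.0 * x            # subgradient step on f(x) = x^2
        else:
            x += eta                      # subgradient of g is -1, so move right
    return best

x_star = comirror(0.0)
print(x_star)  # approaches the optimum x = 1
```

The SIP extension replaces the single constraint g with a cut chosen (here, by random sampling) from the infinite constraint family, but the per-iteration switching logic is the same.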
- Published
- 2020