Search Results (418 results)
2. Machine learning for combinatorial optimization: A methodological tour d’horizon
- Author
- Antoine Prouvost, Yoshua Bengio, and Andrea Lodi
- Subjects
Logistics and transportation, Operations research, Information Systems and Management, Optimization problems, General Computer Science, Branch and bound, Computer science, Management Science and Operations Research, Machine learning, Industrial and Manufacturing Engineering, Modeling and Simulation, Combinatorial optimization, Artificial intelligence, Heuristics - Abstract
This paper surveys recent attempts, from both the machine learning and operations research communities, at leveraging machine learning to solve combinatorial optimization problems. Given the hard nature of these problems, state-of-the-art algorithms rely on handcrafted heuristics for making decisions that are otherwise too expensive to compute or mathematically not well defined. Machine learning therefore looks like a natural candidate for making such decisions in a more principled and optimized way. We advocate pushing the integration of machine learning and combinatorial optimization further and detail a methodology for doing so. A main point of the paper is to see generic optimization problems as data points and to ask what the relevant distribution of problems is for learning on a given task.
- Published
- 2021
3. Recent advances in selection hyper-heuristics
- Author
- Edmund K. Burke, John H. Drake, Ender Özcan, and Ahmed Kheiri
- Subjects
Logistics and transportation, Operations research, Information Systems and Management, General Computer Science, Heuristics, Computer science, Management Science and Operations Research, Machine learning, Industrial and Manufacturing Engineering, Modeling and Simulation, Problem domains, Artificial intelligence, Selection hyper-heuristics - Abstract
Hyper-heuristics have emerged as a way to raise the level of generality of search techniques for computational search problems. This is in contrast to many approaches, which represent customised methods for a single problem domain or a narrow class of problem instances. The term hyper-heuristic was defined in the early 2000s as a heuristic to choose heuristics, but the idea of designing high-level heuristic methodologies can be traced back to the early 1960s. The current state-of-the-art in hyper-heuristic research comprises a set of methods that are broadly concerned with intelligently selecting or generating a suitable heuristic for a given situation. Hyper-heuristics can be considered as search methods that operate on lower-level heuristics or heuristic components, and can be categorised into two main classes: heuristic selection and heuristic generation. Here we will focus on the first of these two categories, selection hyper-heuristics. This paper gives a brief history of this emerging area, reviews contemporary selection hyper-heuristic literature, and discusses recent selection hyper-heuristic frameworks. In addition, the existing classification of selection hyper-heuristics is extended, in order to reflect the nature of the challenges faced in contemporary research. Unlike the survey on hyper-heuristics published in 2013, this paper focuses only on selection hyper-heuristics and presents critical discussion, current research trends and directions for future research.
- Published
- 2020
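To make the idea of a selection hyper-heuristic concrete, here is a minimal sketch (not from the paper above; the function names and the epsilon-greedy learning mechanism are illustrative assumptions): a high-level controller repeatedly picks one low-level heuristic, applies it, and rewards it when the move improves the solution.

```python
import random

def selection_hyper_heuristic(start, heuristics, cost, iters=100, eps=0.2, seed=0):
    """Toy selection hyper-heuristic: epsilon-greedy choice among low-level
    heuristics, accepting only cost-improving moves (minimisation)."""
    rng = random.Random(seed)
    reward = {h: 0 for h in heuristics}
    best = start
    for _ in range(iters):
        # Mostly exploit the heuristic with the best track record, sometimes explore.
        h = rng.choice(heuristics) if rng.random() < eps else max(heuristics, key=reward.get)
        cand = h(best)
        if cost(cand) < cost(best):
            reward[h] += 1
            best = cand
    return best
```

On a toy problem (minimise x² with "increment" and "decrement" as the low-level heuristics), the controller quickly learns to favour the improving move; real selection hyper-heuristics replace the reward rule and acceptance criterion with far richer learning and acceptance mechanisms.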
4. Determining the fuzzy measures in multiple criteria decision aiding from the tolerance perspective
- Author
- Dengsheng Wu, Xiaoyang Yao, Xiaolei Sun, and Jianping Li
- Subjects
Operations research, Information Systems and Management, General Computer Science, Fuzzy measure theory, Management Science and Operations Research, Machine learning, Multiple-criteria decision analysis, Fuzzy logic, Industrial and Manufacturing Engineering, Preference, Choquet integral, Modeling and Simulation, Pairwise comparison, Artificial intelligence, Additive models, Mathematics - Abstract
We consider multiple criteria decision aiding (MCDA) in the case of interactions between criteria. In dealing with such interactions, fuzzy measures and integrals have demonstrated great advantages. Nevertheless, determining fuzzy measures has proven difficult because the capacities of not only each single criterion but of all subsets of criteria need to be identified. Due to the value-judgment essence of MCDA, the attitudes of the decision maker (DM) are typically modeled to identify fuzzy measures. In this paper, the tolerance attitudes of the DM, which imply a direct requirement instead of partial preferences, are modeled for the first time with regard to the determination of fuzzy measures. With two scales developed in this paper, the DM can directly express tolerance attitudes toward certain criteria rather than providing partial preferences through pairwise comparison. As a result, the approach requires less prior knowledge and is more efficient to some extent. Further, the inherent interaction mechanism of criteria under different tolerance attitudes is explored. Finally, the tolerance attitudes are applied to the process of multiple criteria analysis using a Choquet integral. A classic student evaluation problem is given as an example, and the evaluation results are compared with additive models. This paper not only provides new inspiration for the determination of fuzzy measures but also improves their descriptive capacity with respect to the real world.
- Published
- 2018
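The discrete Choquet integral the abstract refers to is short enough to sketch directly (a generic textbook form, not the paper's specific tolerance-based determination of the measure; the toy capacities below are mine):

```python
def choquet_integral(scores, mu):
    """Discrete Choquet integral of criterion scores w.r.t. a fuzzy measure mu.

    scores: dict criterion -> value in [0, 1]
    mu: dict frozenset of criteria -> capacity (monotone, mu(all) = 1)
    """
    # Sort criteria by ascending score: x_(1) <= x_(2) <= ...
    order = sorted(scores, key=scores.get)
    total, prev = 0.0, 0.0
    for i, c in enumerate(order):
        # Weight the increment by the capacity of {criteria with score >= x_(i)}.
        total += (scores[c] - prev) * mu[frozenset(order[i:])]
        prev = scores[c]
    return total
```

With an additive measure (capacities of singletons summing to 1) this reduces to a weighted mean; non-additive capacities are what let the integral model interaction between criteria.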
5. An experimental comparison of seriation methods for one-mode two-way data
- Author
- Michael Hahsler
- Subjects
Operations research, Information Systems and Management, General Computer Science, Computer science, Management Science and Operations Research, Machine learning, Industrial and Manufacturing Engineering, Analytics, Modeling and Simulation, Seriation, Combinatorial optimization, Artificial intelligence, Data mining, Heuristics - Abstract
Seriation aims at finding a linear order for a set of objects that reveals structural information which can be used for deriving data-driven decisions. It presents a difficult combinatorial optimization problem with roots and applications in many fields, including operations research. This paper focuses on a popular seriation problem which tries to find an order for a single set of objects that optimizes a given seriation criterion defined on one-mode two-way data, i.e., an object-by-object dissimilarity matrix. Over the years, members of different research communities have introduced many criteria and seriation methods for this problem. It is often not clear how different seriation criteria and methods relate to each other and which criterion or method to use for a given application. These methods represent analytics tools and are therefore of theoretical and practical interest to the operations research community. The purpose of this paper is to provide a consistent overview of the most popular criteria and seriation methods and to present a comprehensive experimental study comparing their performance on artificial datasets and a representative set of real-world datasets.
- Published
- 2017
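One of the simplest seriation criteria of the kind surveyed above is the Hamiltonian path length: the sum of dissimilarities between consecutive objects in the candidate order. A minimal sketch (the function name and toy matrix are mine):

```python
import numpy as np

def path_length(D, order):
    """Hamiltonian path length seriation criterion: sum of dissimilarities
    between consecutive objects in the linear order (lower is better)."""
    o = np.asarray(order)
    return float(D[o[:-1], o[1:]].sum())
```

For four points on a line with D[i, j] = |i - j|, the natural order 0-1-2-3 scores 3, while a scrambled order scores worse; a seriation method searches the permutation space for orders minimising such a criterion.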
6. Hypothesis testing for means in connection with fuzzy rating scale-based data: algorithms and applications
- Author
- Beatriz Sinova, María Ángeles Gil, María Asunción Lubiano, Sara de la Rosa de Sáa, and Manuel Montenegro
- Subjects
Information Systems and Management, Fuzzy classification, General Computer Science, Management Science and Operations Research, Machine learning, Type-2 fuzzy sets and systems, Defuzzification, Fuzzy logic, Industrial and Manufacturing Engineering, Statistics and probability, Fuzzy numbers, Mathematics, Fuzzy measure theory, Modeling and Simulation, Fuzzy set operations, Artificial intelligence, Data mining, Membership functions - Abstract
The fuzzy rating scale was introduced as a tool to measure intrinsically ill-defined/imprecisely-valued attributes in a free way. Thus, users do not have to choose a value from a class of prefixed ones (as happens when a fuzzy semantic representation of a linguistic term set is considered), but simply draw the fuzzy number that best represents their valuation or measurement. The freedom inherent to the fuzzy rating scale process allows users to collect data with a high level of richness, accuracy, expressiveness, diversity and subjectivity, which is especially valuable for statistical purposes. This paper presents an inferential approach to analyzing data obtained by using the fuzzy rating scale. More concretely, the paper focuses on testing different hypotheses about means, on the basis of a sound methodology established over recent years. All the procedures developed to this aim are presented in an algorithmic way adapted to generic fuzzy rating scale-based data, and they are illustrated by means of a real-life example.
- Published
- 2016
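The building block behind such hypothesis tests is a sample mean of fuzzy data. For triangular fuzzy numbers the level-wise (Aumann-type) mean is again triangular, so it reduces to a componentwise average; a minimal sketch under that assumption (the encoding as (left, mode, right) triples and the function name are mine):

```python
import numpy as np

def fuzzy_sample_mean(tfns):
    """Sample mean of triangular fuzzy numbers given as (left, mode, right)
    triples. The level-wise Minkowski average of triangular fuzzy numbers is
    again triangular, so it is just the componentwise mean of the triples."""
    return np.asarray(tfns, dtype=float).mean(axis=0)
```

The tests in the paper go much further, bootstrapping distances between such means; this only illustrates the arithmetic of the fuzzy mean itself.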
7. Integration of electromagnetism with multi-objective evolutionary algorithms for RCPSP
- Author
- Zhou Wu, Yong Tang, Xi-Xi Hong, Jing Xiao, and Jian-Chao Tang
- Subjects
Mathematical optimization, Information Systems and Management, General Computer Science, Computer science, Tardiness, Evolutionary algorithms, Management Science and Operations Research, Industrial and Manufacturing Engineering, Scheduling, Search algorithms, Operations research, Job shop scheduling, Pareto principle, Project scheduling, Modeling and Simulation, Artificial intelligence, Heuristics, Evolutionary programming - Abstract
As one of the most challenging combinatorial optimization problems in scheduling, the resource-constrained project scheduling problem (RCPSP) has attracted numerous scholars' interest, resulting in considerable research over the past few decades. However, most of these papers focus on the single-objective RCPSP; only a few concentrate on multi-objective resource-constrained project scheduling problems (MORCPSP). Inspired by a procedure called electromagnetism (EM), which can help a generic population-based evolutionary search algorithm obtain good results for the single-objective RCPSP, in this paper we extend EM and integrate it into three reputable state-of-the-art multi-objective evolutionary algorithms (MOEAs) for MORCPSP: the non-dominated sorting based multi-objective evolutionary algorithm (NSGA-II), the strength Pareto evolutionary algorithm (SPEA2) and the multi-objective evolutionary algorithm based on decomposition (MOEA/D). We aim to optimize makespan and total tardiness. Empirical analyses based on standard benchmark datasets are conducted by comparing the EM-integrated versions of NSGA-II, SPEA2 and MOEA/D with the original algorithms without EM. The results demonstrate that EM can improve the performance of NSGA-II and SPEA2, especially NSGA-II.
- Published
- 2016
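The common core of the MOEAs named above is the notion of Pareto non-domination over objective vectors such as (makespan, total tardiness). A minimal sketch of extracting the non-dominated front (generic, not the paper's EM procedure; the toy points are mine):

```python
def pareto_front(points):
    """Non-dominated subset under minimisation of every objective,
    e.g. (makespan, total tardiness) pairs of candidate schedules."""
    def dominates(q, p):
        # q dominates p: no worse everywhere and strictly better somewhere.
        return all(a <= b for a, b in zip(q, p)) and q != p
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

NSGA-II repeats this idea recursively (rank 1 front, then rank 2 on the remainder, and so on) and adds crowding-distance tie-breaking; this sketch shows only the first front.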
8. Switching regression metamodels in stochastic simulation
- Author
- M. Isabel Reis dos Santos and Pedro M. Reis dos Santos
- Subjects
Mathematical optimization, Information Systems and Management, General Computer Science, Computer science, Maximum likelihood, Asymptotic distribution, Management Science and Operations Research, Machine learning, Least squares, Industrial and Manufacturing Engineering, Statistics and probability, Consistency, Stochastic simulation, Linear regression, Operations research, Regression, Metamodeling, Modeling and Simulation, Artificial intelligence - Abstract
Simulation models are frequently analyzed through a linear regression model that relates input/output data behavior. However, in several situations different data subsets may resemble different models. The purpose of this paper is to present a procedure for constructing switching regression metamodels in stochastic simulation, and to exemplify the practical use of statistical techniques of switching regression in the analysis of simulation results. The metamodel estimation is made using a mixture of weighted least squares and the maximum likelihood method. The consistency and asymptotic normality of the maximum likelihood estimator are established. The proposed methods are applied in the construction of a switching regression metamodel. The paper places special emphasis on the usefulness of constructing switching metamodels in simulation analysis.
- Published
- 2016
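In its simplest form, a switching regression fits separate linear models on different input regimes. A minimal sketch with a known switch point (the paper's method also estimates the regime structure and uses maximum likelihood; the function name and toy data are mine):

```python
import numpy as np

def switching_fit(x, y, split):
    """Fit separate least-squares lines on the two regimes defined by a known
    switch point -- a minimal stand-in for a switching regression metamodel."""
    left = x <= split
    return np.polyfit(x[left], y[left], 1), np.polyfit(x[~left], y[~left], 1)
```

Each returned pair is (slope, intercept) for its regime; a metamodel built this way can capture input/output behaviour that a single global line would miss.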
9. Convergence properties and practical estimation of the probability of rank reversal in pairwise comparisons for multi-criteria decision making problems
- Author
- Thomas Sphicopoulos, Thomas Kamalakis, and Georgia Dede
- Subjects
Rank reversals in decision-making, Information Systems and Management, General Computer Science, Management Science and Operations Research, Machine learning, Fuzzy logic, Industrial and Manufacturing Engineering, Modeling and Simulation, Convergence, Pairwise comparison, Artificial intelligence, Preference, Mathematics, Decision analysis - Abstract
In this paper, we address the impact of uncertainty introduced when the experts complete pairwise comparison matrices, in the context of multi-criteria decision making. We first discuss how uncertainty can be quantified and modeled and then show how the probability of rank reversal scales with the number of experts. We consider the impact of various aspects which may affect the estimation of probability of rank reversal in the context of pairwise comparisons, such as the uncertainty level, alternative preference scales and different weight estimation methods. We also consider the case where the comparisons are carried out in a fuzzy manner. It is shown that in most circumstances, augmenting the size of the expert group beyond 15 produces a small change in the probability of rank reversal. We next address the issue of how this probability can be estimated in practice, from information gathered simply from the comparison matrices of a single expert group. We propose and validate a scheme which yields an estimate for the probability of rank reversal and test the applicability of this scheme under various conditions. The framework discussed in the paper can allow decision makers to correctly choose the number of experts participating in a pairwise comparison and obtain an estimate of the credibility of the outcome.
- Published
- 2015
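The probability of rank reversal studied above can be estimated by Monte Carlo: perturb the pairwise judgements, recompute the priority weights, and count how often the ranking changes. A minimal sketch assuming row geometric-mean weights and multiplicative log-normal noise (both are my modelling choices, not necessarily the paper's):

```python
import numpy as np

def weights(A):
    """Row geometric-mean priorities from a pairwise comparison matrix."""
    w = np.prod(A, axis=1) ** (1.0 / len(A))
    return w / w.sum()

def rank_reversal_prob(A, sigma=0.1, trials=1000, seed=0):
    """Monte Carlo estimate of the probability that multiplicative log-normal
    noise on the upper-triangular judgements changes the ranking."""
    rng = np.random.default_rng(seed)
    n, base = len(A), tuple(np.argsort(-weights(A)))
    flips = 0
    for _ in range(trials):
        B = A.copy()
        for i in range(n):
            for j in range(i + 1, n):
                B[i, j] = A[i, j] * np.exp(rng.normal(0.0, sigma))
                B[j, i] = 1.0 / B[i, j]  # keep reciprocity
        flips += tuple(np.argsort(-weights(B))) != base
    return flips / trials
```

With well-separated alternatives and small noise the estimate is near zero; the probability grows with the uncertainty level, which is the kind of scaling behaviour the paper analyses.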
10. Concept-cognitive computing system for dynamic classification
- Author
- Zongrun Wang, Pei Quan, Yong Shi, and Yunlong Mi
- Subjects
Information Systems and Management, General Computer Science, Computer science, Dynamic data, Big data, Cognitive computing, Management Science and Operations Research, Machine learning, Industrial and Manufacturing Engineering, Modeling and Simulation, Artificial intelligence - Abstract
In the context of big data, organizations and individuals can often benefit from data mining techniques such as classification. However, decision-makers must react quickly to insights over time under dynamic environments. In this paper, we present a novel perspective, the concept-cognitive computing system (C3S), to achieve dynamic classification learning over partially labeled and labeled data. More specifically, to store and consolidate knowledge, a concept falling space is first employed as a basic knowledge memory mechanism in C3S. Then, we design a new concept-cognitive process that simulates human learning processes and can incorporate new information into old knowledge. Finally, a strategy of constructing two different concept spaces is considered in our system when faced with partially labeled dynamic data. Although there are significant differences in learning paradigm between C3S and conventional incremental learning methods, our proposed C3S still delivers strong performance for dynamic classification in comparison with several state-of-the-art incremental learning approaches. In addition, experiments on various datasets demonstrate that our system can obtain good performance on partially labeled and labeled data simultaneously in dynamic environments.
- Published
- 2022
11. Separable solutions for Markov processes in random environments
- Author
- Simonetta Balsamo and Andrea Marin
- Subjects
Information Systems and Management, Markov kernels, General Computer Science, Markov chains, Variable-order Markov models, Markov processes, Management Science and Operations Research, Machine learning, Markov models, Industrial and Manufacturing Engineering, Continuous-time Markov chains, Markov renewal processes, Modeling and Simulation, Applied mathematics, Markov property, Artificial intelligence, Mathematics - Abstract
In this paper we address the problem of efficiently deriving the steady-state distribution for a continuous time Markov chain (CTMC) S evolving in a random environment E. The process underlying E is also a CTMC, and S is called a Markov modulated process. Markov modulated processes have been widely studied in the literature since they are applicable when an environment influences the behaviour of a system. For instance, this is the case for a wireless link, whose quality may depend on the state of random factors such as the intensity of noise in the environment. In this paper we study the class of Markov modulated processes which exhibit a separable, product-form stationary distribution. We show that several models proposed in the literature can be studied by applying the Extended Reversed Compound Agent Theorem (ERCAT), and new product-forms are also derived. We further address the question of whether ERCAT is necessary for product-forms and show a meaningful example of a product-form not derivable via ERCAT.
- Published
- 2013
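For intuition on the object being computed above: the steady-state distribution π of a CTMC with generator Q solves πQ = 0 with Σπ = 1 (product-form results like ERCAT avoid solving this system on the full joint state space). A minimal direct-solve sketch for a small chain (generic numerics, not the paper's method):

```python
import numpy as np

def ctmc_stationary(Q):
    """Stationary distribution of a CTMC with infinitesimal generator Q:
    solve pi Q = 0 with sum(pi) = 1 by swapping one (redundant) balance
    equation for the normalisation constraint."""
    n = len(Q)
    A = np.vstack([Q.T[:-1], np.ones(n)])
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)
```

For a two-state chain with rates 2 (up) and 1 (down) this gives (1/3, 2/3); for a modulated process the joint generator grows multiplicatively with the environment, which is exactly why separable solutions matter.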
12. A taxonomy and review of the fuzzy data envelopment analysis literature: Two decades in the making
- Author
- Ali Emrouznejad, Madjid Tavana, and Adel Hatami-Marbini
- Subjects
Information Systems and Management, Fuzzy classification, General Computer Science, Fuzzy set theory, Fuzzy sets, Management Science and Operations Research, Fuzzy data envelopment analysis, Machine learning, Fuzzy logic, Industrial and Manufacturing Engineering, Data envelopment analysis, Modeling and Simulation, Taxonomy, Fuzzy set operations, Data mining, Artificial intelligence, Mathematics - Abstract
Data envelopment analysis (DEA) is a methodology for measuring the relative efficiencies of a set of decision making units (DMUs) that use multiple inputs to produce multiple outputs. Crisp input and output data are fundamentally indispensable in conventional DEA. However, the observed values of the input and output data in real-world problems are sometimes imprecise or vague. Many researchers have proposed various fuzzy methods for dealing with the imprecise and ambiguous data in DEA. In this study, we provide a taxonomy and review of the fuzzy DEA methods. We present a classification scheme with four primary categories, namely, the tolerance approach, the α-level based approach, the fuzzy ranking approach and the possibility approach. We discuss each classification scheme and group the fuzzy DEA papers published in the literature over the past 20 years. To the best of our knowledge, this paper appears to be the only review and complete source of references on fuzzy DEA.
- Published
- 2011
13. Simulation metamodeling with dynamic Bayesian networks
- Author
- Kai Virtanen and Jirka Poropudas
- Subjects
Machine learning, Information Systems and Management, General Computer Science, Computer science, Artificial intelligence, Management Science and Operations Research, Industrial and Manufacturing Engineering, Joint probability distributions, Discrete event simulation, Dynamic Bayesian networks, Time evolution, Bayesian networks, Statistical models, Conditional probability distributions, Confidence intervals, Metamodeling, Modeling and Simulation, Probability distributions, Data mining, Random variables - Abstract
This paper presents a novel approach to simulation metamodeling using dynamic Bayesian networks (DBNs) in the context of discrete event simulation. A DBN is a probabilistic model that represents the joint distribution of a sequence of random variables and enables the efficient calculation of their marginal and conditional distributions. In this paper, the construction of a DBN based on simulation data and its utilization in simulation analyses are presented. The DBN metamodel allows the study of the time evolution of simulation by tracking the probability distribution of the simulation state over the duration of the simulation. This feature is unprecedented among existing simulation metamodels. The DBN metamodel also enables effective what-if analysis which reveals the conditional evolution of the simulation. In such an analysis, the simulation state at a given time is fixed and the probability distributions representing the state at other time instants are updated. Simulation parameters can be included in the DBN metamodel as external random variables. Then, the DBN offers a way to study the effects of parameter values and their uncertainty on the evolution of the simulation. The accuracy of the analyses allowed by DBNs is studied by constructing appropriate confidence intervals. These analyses could be conducted based on raw simulation data but the use of DBNs reduces the duration of repetitive analyses and is expedited by available Bayesian network software. The construction and analysis capabilities of DBN metamodels are illustrated with two example simulation studies.
- Published
- 2011
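The core quantity such a DBN metamodel tracks is the marginal distribution of the simulation state at each time step; a what-if analysis conditions the state at some instant and re-propagates. A heavily simplified sketch for a discrete-time chain (real DBNs factor the state into many variables and use Bayesian network inference; the function name and toy matrix are mine):

```python
import numpy as np

def state_marginals(P, p0, horizon):
    """Marginal distribution of a discrete-time state at each step, the core
    quantity a DBN metamodel tracks over simulated time. Conditioning on the
    state at some step ("what-if") amounts to restarting the recursion there
    from a degenerate distribution."""
    dists = [np.asarray(p0, dtype=float)]
    for _ in range(horizon):
        dists.append(dists[-1] @ P)
    return np.array(dists)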
14. A new approach to multi-criteria sorting based on fuzzy outranking relations: The THESEUS method
- Author
- Jorge Navarro and Eduardo Fernandez
- Subjects
Information Systems and Management, General Computer Science, Outranking relations, Sorting, Management Science and Operations Research, Fuzzy logic, Industrial and Manufacturing Engineering, Modeling and Simulation, Artificial intelligence, Preference, Mathematics - Abstract
In this paper, we propose the THESEUS method, a new approach based on fuzzy outranking relations to multi-criteria sorting problems. Compared with other outranking-based methods, THESEUS is inspired by another view of multi-criteria classification problems. It utilizes a new way of evaluating the assignment of an object to an element of the set of ordered categories that were previously defined. This way is based on comparing every possible assignment with the information from various preference relations that are derived from a fuzzy outranking relation defined on the universe of objects. The appropriate assignment is determined by solving a simple selection problem. The capacity of a reference set for making appropriate assignments is related to a good characterization of the categories. A single reference action characterizing a category may be insufficient to achieve well-determined assignments. In this paper, the reference set capacity to perform appropriate assignments is characterized by some new concepts. This capacity may be increased when more objects are added to the reference set. THESEUS is a method for handling the preference information contained in such larger reference sets.
- Published
- 2011
15. Acceptable set topic modeling
- Author
- Lauren Berk Wheelock and Dessislava A. Pachamanova
- Subjects
Topic models, Information Systems and Management, Perplexity, General Computer Science, Computer science, Robust optimization, Inference, Management Science and Operations Research, Machine learning, Bayesian inference, Industrial and Manufacturing Engineering, Local optima, Modeling and Simulation, Artificial intelligence, Gibbs sampling - Abstract
Topic modeling is a significant branch of natural language processing and machine learning focused on inferring the generative process of text. Traditionally, algorithms for estimating topic models have relied on Bayesian inference and Gibbs sampling. This paper proposes a novel “acceptable set” framework for formulating topic modeling problems inspired by ideas from discrete component analysis and data-driven robust optimization. Our approach not only simplifies the design and inference of topic models, but also allows for extensions and generalizations that are challenging to integrate into traditional approaches. Different restrictions (e.g., sparsity) and assumptions (e.g., alternative generative processes) can be easily incorporated into our formulations through additional or modified constraints. Our formulations also naturally control a widely used metric of model quality, perplexity. We adapt state-of-the-art stochastic gradient methods to find good local optima for the optimization formulations. The algorithms are efficient, scaling to realistic problem sizes with runtimes comparable to existing methods. Through extensive computational experiments, we show that our methods have improved solution quality compared to baseline methods and reconstruct more reliably the underlying generative models. Our framework overcomes known vulnerabilities of traditional topic modeling algorithms: our methods are effective in low-data settings, register good out-of-sample performance, and perform well for a variety of initial assumptions on input parameter values.
- Published
- 2022
16. Voting: A machine learning approach
- Author
- László Szepesváry, Clemens Puppe, Dávid Burka, and Attila Tasnádi
- Subjects
Information Systems and Management, General Computer Science, Artificial neural networks, Computer science, Learnability, Ranking, Sampling, Management Science and Operations Research, Condorcet method, Machine learning, Industrial and Manufacturing Engineering, Sample size determination, Modeling and Simulation, Voting, Artificial intelligence, Axioms - Abstract
Voting rules can be assessed from quite different perspectives: the axiomatic, the pragmatic, computational or conceptual simplicity, susceptibility to manipulation, and many other aspects. In this paper, we take the machine learning perspective and ask how prominent voting rules compare in terms of their learnability by a neural network. To address this question, we train neural networks to choose Condorcet, Borda, and plurality winners, respectively. Remarkably, our statistical results show that, when trained on a limited (but still reasonably large) sample, the neural network mimics the Borda rule most closely, no matter which rule it was previously trained on. The main overall conclusion is that the necessary training sample size for a neural network varies significantly with the voting rule, and we rank a number of popular voting rules in terms of the sample size required.
- Published
- 2022
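The three target rules the networks are trained on are standard and easy to state in code; a minimal sketch over ranked ballots (tuples ordered best-first; the example profile is mine, chosen so the rules disagree):

```python
from collections import Counter

def plurality_winner(ballots):
    """Each ballot ranks candidates best-first; plurality counts top choices."""
    return Counter(b[0] for b in ballots).most_common(1)[0][0]

def borda_winner(ballots):
    """m-1 points for first place, m-2 for second, ..., 0 for last."""
    m, scores = len(ballots[0]), Counter()
    for b in ballots:
        for pos, c in enumerate(b):
            scores[c] += m - 1 - pos
    return scores.most_common(1)[0][0]

def condorcet_winner(ballots):
    """Candidate beating every rival in pairwise majority contests, or None."""
    for c in ballots[0]:
        if all(2 * sum(b.index(c) < b.index(d) for b in ballots) > len(ballots)
               for d in ballots[0] if d != c):
            return c
    return None
```

On some profiles the rules give different winners and a Condorcet winner may not exist at all, which is what makes "which rule does a trained network actually mimic?" a non-trivial question.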
17. Can holistic declaration of preferences improve a negotiation offer scoring system?
- Author
- Tomasz Wachowicz and Ewa Roszkowska
- Subjects
Decision support systems, Information Systems and Management, Scoring systems, General Computer Science, Computer science, Management Science and Operations Research, Machine learning, Industrial and Manufacturing Engineering, Negotiation, Software, Modeling and Simulation, Preference elicitation, Artificial intelligence - Abstract
In this paper, we analyse the problem of determining a negotiation offer scoring system using an alternative approach to classic direct rating (DR). We examine the effectiveness of pre-negotiation preference elicitation on the basis of holistic judgments, supported by a software decision support tool. This approach is based on rank-ordering examples of complete offers, which is then disaggregated using the UTilités Additives (UTA) method. In a series of studies, we analyse the accuracy of the scoring systems obtained from these approaches, as well as the negotiators' subjective evaluation of their use and usefulness. The technical capability of the various setups of the UTA-based disaggregation models to produce accurate scoring systems is verified by simulation in Study 1. The empirical applicability of the most promising UTA-based models is studied in two experiments, in which the negotiators used examples of both predefined and self-declared sets of offers (Study 2), or applied an enhanced UTA algorithm (Study 3). The enhanced algorithm used a predefined set of offers, implemented certain elements of DR, and allowed for iterative improvements of the scoring system. The results show that the UTA-based approach works in a technical sense, but empirically its performance is worse than DR unless the set of example offers is predefined. The enhanced algorithm produced a better scoring system, but users' subjective evaluations were mixed.
- Published
- 2022
18. A model for real-time failure prognosis based on hidden Markov model and belief rule base
- Author
-
Dong-Ling Xu, Chang Hua Hu, Zhi Jie Zhou, Maoyin Chen, and Donghua Zhou
- Subjects
Hidden Markov model ,Information Systems and Management ,General Computer Science ,Computer science ,business.industry ,Process (engineering) ,System identification ,System safety ,Management Science and Operations Research ,Markov model ,computer.software_genre ,Machine learning ,Industrial and Manufacturing Engineering ,Expert system ,Failure prognosis ,Expert systems ,Modeling and Simulation ,Environmental factors ,Artificial intelligence ,business ,Belief rule base ,computer - Abstract
As one of the most important aspects of condition-based maintenance (CBM), failure prognosis has attracted increasing attention with the growing demand for higher operational efficiency and safety in industrial systems. Currently there is no effective method that can predict a hidden failure of a system in real time when the failure process is influenced by changing environmental factors and no accurate mathematical model of the system is available, owing to its intrinsic complexity and its operation in a potentially uncertain environment. This paper therefore develops a new hidden Markov model (HMM) based method to deal with this problem. Although an accurate model linking environmental factors to the failure process is difficult to obtain, expert knowledge can be collected and represented by a belief rule base (BRB), which is in effect an expert system. By combining the HMM with the BRB, a new prognosis model is proposed that predicts the hidden failure in real time even under the influence of changing environmental factors. In the proposed model, the HMM captures the relationships between the hidden failure and the monitored observations of the system. The BRB models the relationships between the environmental factors and the transition probabilities among the hidden states of the system, including the hidden failure, which is the main contribution of this paper. Moreover, a recursive algorithm for online updating of the prognosis model is developed. An experimental case study demonstrates the implementation and potential applications of the proposed real-time failure prognosis method.
- Published
- 2010
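The prognosis scheme summarised in the abstract above can be sketched as an HMM forward filter whose transition matrix is driven by an environmental input. The three-state model, the two hand-written "rules", and the emission table below are invented for illustration; they stand in for, and are much simpler than, the authors' belief rule base.

```python
import numpy as np

STATES = ["healthy", "degraded", "failed"]

def transition_matrix(env):
    """Toy stand-in for the rule base: harsher environments (env in [0, 1])
    raise the degradation and failure rates."""
    d = 0.05 + 0.20 * env
    f = 0.01 + 0.10 * env
    return np.array([
        [1 - d,     d,   0.0],     # healthy -> degraded
        [0.0,   1 - f,     f],     # degraded -> failed
        [0.0,     0.0,   1.0],     # failed is absorbing
    ])

# Emission model: P(observation | state) for a binary alarm signal.
EMIT = np.array([
    [0.9, 0.1],   # healthy: alarm unlikely
    [0.5, 0.5],   # degraded
    [0.1, 0.9],   # failed: alarm likely
])

def forward_filter(obs, envs, prior):
    """Recursive belief update, one step per (observation, env) pair."""
    belief = np.asarray(prior, dtype=float)
    for o, e in zip(obs, envs):
        belief = belief @ transition_matrix(e)   # predict
        belief = belief * EMIT[:, o]             # correct
        belief /= belief.sum()                   # normalise
    return belief

belief = forward_filter(obs=[0, 1, 1], envs=[0.2, 0.8, 0.8],
                        prior=[1.0, 0.0, 0.0])
print(dict(zip(STATES, belief.round(3))))
```

The recursion is the standard predict-correct forward pass; the paper's contribution is learning the env-to-transition mapping from expert rules rather than hand-coding it as done here.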
19. Determining targets for multi-stage adaptive tests using integer programming
- Author
-
Mabel Tam Kung, Ronald D. Armstrong, and Louis A. Roussos
- Subjects
Mathematical optimization ,Information Systems and Management ,Forcing (recursion theory) ,General Computer Science ,business.industry ,Interval (mathematics) ,Function (mathematics) ,Management Science and Operations Research ,Machine learning ,computer.software_genre ,Industrial and Manufacturing Engineering ,Test (assessment) ,Modeling and Simulation ,Item response theory ,Computerized adaptive testing ,Artificial intelligence ,Special case ,business ,computer ,Integer programming ,Mathematics - Abstract
This paper considers a multi-stage adaptive test (MST) in which the testlets at each stage are determined prior to administration. The assembly of an MST requires target information and target response functions for the MST design. The targets are chosen to create tests with accurate scoring and high utilization of the items in an operational pool. Requiring the information and response function plots of all MSTs to lie within an interval about the targets yields parallel MSTs, in the sense in which standardized paper-and-pencil tests are considered parallel. The objective of this paper is to present a method for determining targets for the MST design based on an item pool and an assumed distribution of examinee ability. The approach is applied to a Skills Readiness Inventory test designed to identify logical reasoning deficiencies of examinees. The method can also be applied to obtain item response theory targets for a linear test, as this is a special case of an MST.
- Published
- 2010
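The assembly task described above, selecting items so the test's information curve tracks a target, can be illustrated in miniature. The 2PL pool, ability grid, targets, and brute-force search below are all invented; the paper solves the selection at scale with integer programming rather than enumeration.

```python
import itertools, math

def info(a, b, theta):
    """Fisher information of a 2PL item (discrimination a, difficulty b)
    at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

pool = [(0.8, -1.0), (1.2, -0.5), (1.0, 0.0), (1.5, 0.3),
        (0.9, 0.8), (1.3, 1.2)]                    # (a, b) per item
thetas = [-1.0, 0.0, 1.0]                          # ability grid
target = [0.6, 0.9, 0.6]                           # target information

def deviation(items):
    """Worst-case gap between the assembled and target information curves."""
    return max(abs(sum(info(a, b, t) for a, b in items) - g)
               for t, g in zip(thetas, target))

# Pick the 3-item testlet whose information curve best matches the target.
best = min(itertools.combinations(pool, 3), key=deviation)
print(best, round(deviation(best), 3))
```

An integer program would replace the enumeration with 0-1 selection variables and minimise the same maximum deviation subject to content constraints.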
20. Assessing bank efficiency and performance with operational research and artificial intelligence techniques: A survey
- Author
-
Meryem Duygun Fethi and Fotios Pasiouras
- Subjects
Decision support system ,Information Systems and Management ,General Computer Science ,Operations research ,Artificial neural network ,Computer science ,business.industry ,Management Science and Operations Research ,Key issues ,Industrial and Manufacturing Engineering ,Field (computer science) ,Support vector machine ,Modeling and Simulation ,Data envelopment analysis ,Artificial intelligence ,business ,Bank failure - Abstract
This paper presents a comprehensive review of 196 studies which employ operational research (O.R.) and artificial intelligence (A.I.) techniques in the assessment of bank performance. Several key issues in the literature are highlighted, and the paper also points to a number of directions for future research. We first discuss numerous applications of data envelopment analysis, the most widely applied O.R. technique in the field. We then discuss applications of other techniques, such as neural networks, support vector machines, and multicriteria decision aid, that have been used in recent years in bank failure prediction studies and in the assessment of bank creditworthiness and underperformance. Published in: European Journal of Operational Research.
- Published
- 2010
21. A methodology for analyzing decision networks, based on information theory
- Author
-
Maryam Ehsani, Soheil Sadi Nezhad, and Ahmad Makui
- Subjects
Information Systems and Management ,General Computer Science ,Decision engineering ,business.industry ,Decision tree ,Evidential reasoning approach ,Decision rule ,Management Science and Operations Research ,Information theory ,computer.software_genre ,Industrial and Manufacturing Engineering ,Modeling and Simulation ,Business decision mapping ,Influence diagram ,Data mining ,Artificial intelligence ,business ,computer ,Decision analysis ,Mathematics - Abstract
This paper models the organization as a distributed decision network. It proposes an approach based on the application and extension of information theory concepts in order to analyze the informational complexity of a decision network arising from interdependence between decision centers. Based on this approach, new quantitative concepts and definitions are proposed to measure the information in a decision center, drawing on Shannon entropy and its counterpart in possibility theory, U-uncertainty. The approach also measures the degree of interdependence between decision centers and the informational complexity of decision networks. The paper presents an agent-based model of the organization as a graph composed of decision centers. The proposed approach is applied to analyzing and assessing the efficiency of the organization structure from an informational communication viewpoint. Structure improvement, the analysis of information flow in the organization, and grouping algorithms are also investigated. The results obtained from this model in different systems viewed as distributed decision networks clarify the importance of structure and of information distribution sources for network efficiency.
- Published
- 2010
22. Fuzzy linear programming models for NPD using a four-phase QFD activity process based on the means-end chain concept
- Author
-
Liang-Hsuan Chen and Wen Chang Ko
- Subjects
Information Systems and Management ,General Computer Science ,Linear programming ,Process (engineering) ,business.industry ,Fuzzy set ,Management Science and Operations Research ,Decision problem ,Fuzzy logic ,Industrial engineering ,Industrial and Manufacturing Engineering ,Modeling and Simulation ,New product development ,Customer satisfaction ,Artificial intelligence ,business ,Quality function deployment ,Mathematics - Abstract
Quality function deployment (QFD) is a customer-driven approach to new product development (NPD) that aims to maximize customer satisfaction. Determining the fulfillment levels of the “hows”, including design requirements (DRs), part characteristics (PCs), process parameters (PPs) and production requirements (PRs), is an important decision problem during the four-phase QFD activity process for new product development. Unlike previous studies, which have focused only on determining DRs, this paper considers the close link between the four phases, using the means-end chain (MEC) concept to build a set of fuzzy linear programming models that determine the contribution levels of each “how” to customer satisfaction. In addition, to tackle the risk problem in NPD processes, this paper incorporates risk analysis, treated as a constraint in the models, into the QFD process. To deal with the vague nature of product development processes, fuzzy approaches are used for both QFD and risk analysis. A numerical example demonstrates the applicability of the proposed model.
- Published
- 2010
23. Hybridizing exact methods and metaheuristics: A taxonomy
- Author
-
Laetitia Jourdan, El-Ghazali Talbi, and Matthieu Basseur (DOLPHIN project, Laboratoire d'Informatique Fondamentale de Lille (LIFL), Université de Lille, CNRS, and Inria Lille - Nord Europe)
- Subjects
021103 operations research ,Information Systems and Management ,General Computer Science ,Computer science ,Heuristic ,business.industry ,0211 other engineering and technologies ,02 engineering and technology ,Management Science and Operations Research ,Industrial and Manufacturing Engineering ,Modeling and Simulation ,[MATH.MATH-CO]Mathematics [math]/Combinatorics [math.CO] ,0202 electrical engineering, electronic engineering, information engineering ,Combinatorial optimization ,020201 artificial intelligence & image processing ,Artificial intelligence ,Heuristics ,business ,Metaheuristic ,ComputingMilieux_MISCELLANEOUS - Abstract
Interest in hybrid optimization methods has grown over the last few years; indeed, more and more papers on cooperation between heuristics and exact techniques are being published. In this paper, we extend an existing taxonomy of hybrid methods involving heuristic approaches in order to cover cooperative schemes between exact methods and metaheuristics. First, we propose some natural approaches for the different schemes of cooperation encountered, and we analyse, for each model, examples taken from the literature. We then recall and complement the proposed grammar and provide an annotated bibliography.
- Published
- 2009
24. Ant colony optimization with a specialized pheromone trail for the car-sequencing problem
- Author
-
Caroline Gagné, Sara Morin, and Marc Gravel
- Subjects
Information Systems and Management ,General Computer Science ,business.industry ,Computer science ,Ant colony optimization algorithms ,Particle swarm optimization ,Management Science and Operations Research ,ComputingMethodologies_ARTIFICIALINTELLIGENCE ,Swarm intelligence ,Industrial and Manufacturing Engineering ,Scheduling (computing) ,Modeling and Simulation ,Pheromone ,Artificial intelligence ,business ,Metaheuristic ,Central element - Abstract
This paper studies the learning process in an ant colony optimization algorithm designed to solve the problem of ordering cars on an assembly line (the car-sequencing problem). This problem has been shown to be NP-hard and evokes a great deal of interest among practitioners. Learning in an ant algorithm is achieved by using an artificial pheromone trail, which is a central element of this metaheuristic. Many versions of the algorithm are found in the literature, the main distinction among them being the management of the pheromone trail. Nevertheless, few of them seek to improve learning by modifying the internal structure of the trail. In this paper, a new pheromone trail structure is proposed that is specifically adapted to the type of constraints in the car-sequencing problem. The quality of the results obtained when solving three sets of benchmark problems is superior to that of the best solutions found in the literature and shows the efficiency of the specialized trail.
- Published
- 2009
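The pheromone-learning cycle at the heart of the paper above can be sketched generically: ants build sequences with trail-proportional probabilities, then the trail is evaporated and reinforced. The (position, class) trail structure, the tiny instance, and the adjacency-penalty objective below are illustrative only, not the authors' constraint-specific trail.

```python
import random

CLASSES, SLOTS = 3, 6
DEMAND = {0: 2, 1: 2, 2: 2}            # cars to sequence per class
RHO, Q = 0.1, 1.0                      # evaporation rate, deposit constant

def build_sequence(tau, rng):
    """One ant: pick a feasible class for each slot with odds proportional
    to the pheromone trail."""
    remaining, seq = dict(DEMAND), []
    for pos in range(SLOTS):
        feas = [c for c in remaining if remaining[c] > 0]
        c = rng.choices(feas, weights=[tau[pos][c] for c in feas])[0]
        remaining[c] -= 1
        seq.append(c)
    return seq

def cost(seq):
    """Toy objective: penalise identical classes in adjacent slots."""
    return sum(1 for x, y in zip(seq, seq[1:]) if x == y)

rng = random.Random(0)
tau = [[1.0] * CLASSES for _ in range(SLOTS)]
for _ in range(200):                   # colony iterations
    seq = build_sequence(tau, rng)
    for pos in range(SLOTS):           # evaporate, then reinforce
        for c in range(CLASSES):
            tau[pos][c] *= (1 - RHO)
        tau[pos][seq[pos]] += Q / (1 + cost(seq))
best = build_sequence(tau, rng)
print(best, cost(best))
```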
25. Qualitative possibilistic influence diagrams based on qualitative possibilistic utilities
- Author
-
Nahla Ben Amor, Wided Guezguez, and Khaled Mellouli
- Subjects
Mathematical optimization ,Decision support system ,Relative value ,Information Systems and Management ,General Computer Science ,Relation (database) ,business.industry ,Decision theory ,Ordinal analysis ,Management Science and Operations Research ,Industrial and Manufacturing Engineering ,Modeling and Simulation ,Probability distribution ,Influence diagram ,Artificial intelligence ,business ,Possibility theory ,Mathematics - Abstract
This paper proposes a new approach for decision making under uncertainty based on influence diagrams and possibility theory. The so-called qualitative possibilistic influence diagrams extend standard influence diagrams in order to avoid the difficulties attached to specifying both probability distributions for chance nodes and utilities for value nodes. In general, it is easier for experts to quantify dependencies between chance nodes qualitatively via possibility distributions and to provide a preference relation between different consequences; in such cases, possibility theory offers a suitable modeling framework. Different combinations of the quantification of chance and utility nodes yield several kinds of possibilistic influence diagrams. This paper focuses on qualitative ones and proposes an indirect evaluation method based on their transformation into possibilistic networks. The proposed approach is implemented in a possibilistic influence diagram toolbox (PIDT).
- Published
- 2009
26. An asset residual life prediction model based on expert judgments
- Author
-
Wenjuan Zhang and Weiyue Wang
- Subjects
Information Systems and Management ,General Computer Science ,Operations research ,business.industry ,Computer science ,Condition-based maintenance ,System identification ,Condition monitoring ,Management Science and Operations Research ,Machine learning ,computer.software_genre ,Residual ,Industrial and Manufacturing Engineering ,Modeling and Simulation ,Probability distribution ,Artificial intelligence ,Asset (economics) ,business ,Set (psychology) ,Random variable ,computer - Abstract
An appropriate and accurate residual life prediction for an asset is essential for cost-effective and timely maintenance planning and scheduling. The paper reports the use of expert judgments as additional information to predict a regularly monitored asset's residual life. The expert judgment is made on the basis of measured condition monitoring parameters and is treated as a random variable, which may be described by a probability distribution owing to the uncertainty involved. Since most expert judgments are in the form of a set of integer numbers, we can either directly use a discrete distribution or use a continuous distribution after some transformation. A key concept used in this paper is conditional residual life, where the residual life at the point of checking is conditional on, among other things, the past expert judgments made on the same asset to date. Stochastic filtering theory is used to predict the residual life given the available expert judgments. Artificial, simulated and real data are used for validating and testing the model developed.
- Published
- 2008
27. Building confidence in models for multiple audiences: The modelling cascade
- Author
-
Colin Eden, Susan Howick, Fran Ackermann, and Terry Williams
- Subjects
Information Systems and Management ,General Computer Science ,business.industry ,Process (engineering) ,Computer science ,Management Science and Operations Research ,Business model ,Data science ,Industrial and Manufacturing Engineering ,Cascade ,Modeling and Simulation ,Transparency (graphic) ,Organizational learning ,Artificial intelligence ,business - Abstract
This paper reports on a model building process developed to enable multiple audiences, particularly non-experts, to appreciate the validity of the models being built and their outcomes. The process is a four-stage reversible cascade. This cascade provides a structured, auditable/transparent, formalized process leading from “real world” interviews, which generate a rich qualitative model, through two intermediate steps to a quantitative simulation model. The cascade process has a number of advantages, including: achieving comprehensiveness; developing organizational learning; testing the veracity of multiple perspectives; modeling transparency; achieving common understanding across many audiences; and promoting confidence in the models. The paper, based on extensive work with organizations, discusses both the cascade process and its inherent benefits.
- Published
- 2008
28. Quality control system design through the goal programming model and the satisfaction functions
- Author
-
Belaid Aouni, Habib Chabchoub, and Mohamed Sadok Cherif
- Subjects
Mathematical optimization ,Information Systems and Management ,General Computer Science ,Computer science ,business.industry ,media_common.quotation_subject ,Control (management) ,Context (language use) ,Management Science and Operations Research ,Industrial and Manufacturing Engineering ,Preference ,Quality control system ,Modeling and Simulation ,Goal programming model ,Goal programming ,Factory (object-oriented programming) ,Quality (business) ,Artificial intelligence ,business ,media_common - Abstract
The goal programming (GP) model has been utilized for designing a quality control system (QCS) in which several features are simultaneously considered. In the context of quality control, the parameters can be imprecise and expressed through intervals. The aim of this paper is to propose two formulations for designing a QCS based on the imprecise GP model. The concept of satisfaction functions is utilized to integrate the decision-maker's preferences explicitly. The developed formulations are illustrated through an example of a paper factory.
- Published
- 2008
29. A hybrid genetic algorithm for the resource-constrained project scheduling problem
- Author
-
Vicente Valls, Francisco Ballestín, and M. Sacramento Quintanilla
- Subjects
Schedule ,education.field_of_study ,Mathematical optimization ,Information Systems and Management ,General Computer Science ,Computer science ,business.industry ,Resource constrained ,Crossover ,Population ,Management Science and Operations Research ,Industrial and Manufacturing Engineering ,Project scheduling problem ,Modeling and Simulation ,Genetic algorithm ,Artificial intelligence ,business ,Heuristics ,education - Abstract
In this paper we propose a Hybrid Genetic Algorithm (HGA) for the Resource-Constrained Project Scheduling Problem (RCPSP). HGA introduces several changes to the GA paradigm: a crossover operator specific to the RCPSP; a local improvement operator that is applied to all generated schedules; a new way of selecting the parents to be combined; and a two-phase strategy in which the second phase restarts the evolution from a population in the neighbourhood of the best schedule found in the first phase. The computational results show that HGA is a fast, high-quality algorithm that outperforms all state-of-the-art algorithms for the RCPSP known to the authors on the instance sets j60 and j120, and that it is competitive with other state-of-the-art heuristics on the instance set j30.
- Published
- 2008
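Genetic algorithms for the RCPSP typically decode an activity list into a schedule with a serial schedule-generation scheme (SGS), which is the kind of building block the HGA above operates on. The five-activity instance below is invented for illustration, and the paper's crossover and local improvement operators are not reproduced.

```python
DUR  = {1: 0, 2: 3, 3: 2, 4: 2, 5: 0}          # durations (1 and 5 are dummies)
REQ  = {1: 0, 2: 2, 3: 2, 4: 1, 5: 0}          # renewable resource demand
PRED = {1: [], 2: [1], 3: [1], 4: [2, 3], 5: [4]}
CAP  = 3                                        # resource capacity

def serial_sgs(activity_list):
    """Schedule activities in list order at the earliest precedence- and
    resource-feasible start; return the project makespan."""
    finish = {}
    usage = [0] * 100                           # resource profile per period
    for j in activity_list:
        t = max((finish[p] for p in PRED[j]), default=0)
        while any(usage[u] + REQ[j] > CAP for u in range(t, t + DUR[j])):
            t += 1                              # shift right until feasible
        for u in range(t, t + DUR[j]):
            usage[u] += REQ[j]
        finish[j] = t + DUR[j]
    return finish[activity_list[-1]]

print(serial_sgs([1, 2, 3, 4, 5]))
```

On this instance, activities 2 and 3 cannot overlap (joint demand 4 exceeds capacity 3), so both precedence-feasible orderings yield a makespan of 7.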
30. Neural networks and seasonality: Some technical considerations
- Author
-
Bruce Curry
- Subjects
Information Systems and Management ,General Computer Science ,Series (mathematics) ,Artificial neural network ,Computer science ,business.industry ,Perspective (graphical) ,Context (language use) ,Management Science and Operations Research ,Seasonality ,medicine.disease ,Industrial and Manufacturing Engineering ,Autoregressive model ,Modeling and Simulation ,Bounded function ,medicine ,Feedforward neural network ,Artificial intelligence ,business - Abstract
Debate continues regarding the capacity of feedforward neural networks (NNs) to deal with seasonality without pre-processing. The purpose of this paper is to provide, with examples, some theoretical perspective for the debate. In the first instance it considers possible specification errors arising through use of autoregressive forms. Secondly, it examines seasonal variation in the context of the so-called ‘universal approximation’ capabilities of NNs, finding that a short (bounded) sinusoidal series is easy for the network but that a series with many turning points becomes progressively more difficult. This follows from results contained in one of the seminal papers on NN approximation. It is confirmed in examples which also show that, to model seasonality with NNs, very large numbers of hidden nodes may be required.
- Published
- 2007
31. Control and voting power in corporate networks: Concepts and computational aspects
- Author
-
Luc Leruth and Yves Crama
- Subjects
Structure (mathematical logic) ,Information Systems and Management ,Theoretical computer science ,General Computer Science ,Computational complexity theory ,business.industry ,Financial networks ,Computer science ,media_common.quotation_subject ,Control (management) ,Management Science and Operations Research ,Industrial and Manufacturing Engineering ,Modeling and Simulation ,Voting ,Algorithmics ,Artificial intelligence ,business ,Game theory ,media_common - Abstract
This paper proposes to rely on power indices to measure the amount of control held by individual shareholders in corporate networks. The value of the indices is determined by a complex voting game viewed as the composition of interlocked weighted majority games; the compound game reflects the structure of shareholdings. The paper describes an integrated algorithmic approach which makes it possible to deal efficiently with the complexity of computing power indices in shareholding networks, irrespective of their size or structure. In particular, the approach explicitly accounts for the presence of float and of cyclic shareholding relationships. It has been successfully applied to the analysis of real-world financial networks.
- Published
- 2007
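The building block composed across the network above is a single weighted majority game, for which a power index can be computed by direct enumeration. The sketch below counts swings (a normalised Banzhaf-style index); the shareholder weights are an invented example, and the paper's contribution, handling interlocked and cyclic holdings efficiently, is not attempted here.

```python
from itertools import combinations

def banzhaf(weights, quota):
    """For each player, count the winning coalitions in which that player
    is a swing (removing the player makes the coalition lose)."""
    n = len(weights)
    swings = [0] * n
    for r in range(n + 1):
        for coal in combinations(range(n), r):
            w = sum(weights[i] for i in coal)
            if w >= quota:
                for i in coal:
                    if w - weights[i] < quota:
                        swings[i] += 1
    total = sum(swings)
    return [s / total for s in swings]          # normalised index

# Shareholders with 45%, 35% and 20% of the votes, simple majority quota:
print(banzhaf([45, 35, 20], 50.01))             # all three are equally pivotal
```

The example shows why vote shares are a poor proxy for control: the 20% shareholder is a swing in exactly as many coalitions as the 45% one.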
32. Aspiration level approach in stochastic MCDM problems
- Author
-
Maciej A. Nowak
- Subjects
Decision support system ,Mathematical optimization ,Information Systems and Management ,Interactive programming ,General Computer Science ,business.industry ,Stochastic dominance ,Management Science and Operations Research ,Multiple-criteria decision analysis ,Industrial and Manufacturing Engineering ,Set (abstract data type) ,Modeling and Simulation ,Goal programming ,Probability distribution ,Artificial intelligence ,business ,Mathematics ,Decision analysis - Abstract
The paper considers a discrete stochastic multiple criteria decision making problem. This problem is defined by a finite set of actions A, a set of attributes X and a set of evaluations of actions with respect to attributes E. In the stochastic case, the evaluation of each action with respect to each attribute takes the form of a probability distribution; thus, the comparison of two actions leads to the comparison of two vectors of probability distributions. In the paper a new procedure for solving this problem is proposed. It is based on three concepts: stochastic dominance, an interactive approach, and preference thresholds. The idea of the procedure comes from the interactive multiple objective goal programming approach. The set of actions is progressively reduced as the decision maker specifies additional requirements. At the beginning, the decision maker is asked to define a preference threshold for each attribute. Then, at each iteration, the decision maker is confronted with the set of remaining actions. If the decision maker is able to make a final choice the procedure ends; otherwise he or she is asked to specify an aspiration level. A didactical example illustrates the proposed technique.
- Published
- 2007
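The screening step underlying such procedures is a pairwise stochastic dominance check between discrete distributions. The sketch below implements first-order stochastic dominance (FSD) on a shared ascending outcome grid; the two example distributions are invented.

```python
from itertools import accumulate

def fsd_dominates(p, q):
    """True if p first-order stochastically dominates q (both are probability
    vectors over the same ascending outcome grid): p's CDF lies everywhere
    at or below q's, and strictly below somewhere."""
    cp, cq = list(accumulate(p)), list(accumulate(q))
    return all(a <= b + 1e-12 for a, b in zip(cp, cq)) and \
           any(a < b - 1e-12 for a, b in zip(cp, cq))

# Evaluations of two actions over payoffs 1..4; A shifts mass upward:
A = [0.1, 0.2, 0.3, 0.4]
B = [0.3, 0.3, 0.2, 0.2]
print(fsd_dominates(A, B), fsd_dominates(B, A))   # True False
```

Higher-order dominance checks (SSD, TSD) iterate the same cumulative comparison on the partial sums of the CDFs.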
33. Dynamic programming and board games: A survey
- Author
-
David K. Smith
- Subjects
Dynamic programming ,Information Systems and Management ,Mode (computer interface) ,General Computer Science ,business.industry ,Modeling and Simulation ,ComputingMilieux_PERSONALCOMPUTING ,Artificial intelligence ,Management Science and Operations Research ,business ,Industrial and Manufacturing Engineering - Abstract
In several of the earliest papers on dynamic programming (DP), reference was made to the possibility that the DP approach might be used to advise players on the optimal strategy for board games such as chess. Since these papers in the 1950s, there have been many attempts to develop such strategies, drawing on ideas from DP and other branches of mathematics. This paper presents a survey of those where a dynamic programming approach has been useful, or where such a formulation of the problem will allow further insight into the optimal mode of play.
- Published
- 2007
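In the spirit of the survey above, a complete dynamic-programming solution of a very small game fits in a few lines. The game below (single-pile subtraction Nim: take 1-3 sticks, last take wins) is our choice of example, not one analysed in the survey.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(sticks):
    """True if the player to move can force a win: some move must leave
    the opponent in a losing position (Bellman-style backward recursion)."""
    return any(not wins(sticks - k) for k in (1, 2, 3) if k <= sticks)

# The DP recovers the classical result: losing positions are multiples of 4.
print([n for n in range(13) if not wins(n)])   # [0, 4, 8, 12]
```

Board games such as chess defeat this approach only through state-space size; the recursion itself is the same.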
34. Fuzzy trees in decision support systems
- Author
-
Marjan Vezjak, Tomaž Savšek, and Nikola Pavešić
- Subjects
Information Systems and Management ,Fuzzy classification ,General Computer Science ,Neuro-fuzzy ,Mathematics::General Mathematics ,business.industry ,Management Science and Operations Research ,Machine learning ,computer.software_genre ,Defuzzification ,Fuzzy logic ,Industrial and Manufacturing Engineering ,ComputingMethodologies_PATTERNRECOGNITION ,Information Fuzzy Networks ,Modeling and Simulation ,Fuzzy number ,Fuzzy set operations ,ComputingMethodologies_GENERAL ,Artificial intelligence ,business ,computer ,Membership function ,Mathematics - Abstract
This paper is based on the following assumption: that there exist a fuzzy tree structure and a distance between fuzzy trees which provide the basis for fuzzy decision-making. The paper provides the following: (1) a new definition of the fuzzy relational tree structure, (2) the development of a new comparative method for fuzzy trees and its experimental testing and evaluation, and (3) a new descriptive method for representing military structures in a fuzzy tree format and the development of a fuzzy decision support system.
- Published
- 2006
35. Model combination in neural-based forecasting
- Author
-
António J. L. Rodrigues and Paulo Sérgio Abreu Freitas
- Subjects
Adaptive methods ,Decision support system ,Information Systems and Management ,General Computer Science ,Adaptive method ,Model combination ,Management Science and Operations Research ,Machine learning ,computer.software_genre ,Industrial and Manufacturing Engineering ,Gaussian radial basis function ,Faculdade de Ciências Exatas e da Engenharia ,symbols.namesake ,Radial basis function ,Gaussian process ,Mathematics ,Series (mathematics) ,Artificial neural network ,business.industry ,Optimal decision-making ,Modeling and Simulation ,symbols ,Artificial intelligence ,business ,computer ,Neural networks ,Forecasting ,Optimal decision - Abstract
This paper discusses different ways of combining neural predictive models or neural-based forecasts. The proposed approaches consider Gaussian radial basis function networks, which can be efficiently identified and estimated through recursive/adaptive methods. The usual framework for linearly combining estimates from different models is extended, to cope with the case where the forecasting errors from those models are correlated. A prefiltering methodology is proposed, addressing the problems raised by heavily nonstationary time series. Moreover, the paper discusses two approaches for decision-making from forecasting models: either inferring decisions from combined predictive estimates, or combining prescriptive solutions derived from different forecasting models.
- Published
- 2006
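The linear framework the paper above extends is the classical variance-minimising combination of correlated forecast errors. The sketch below computes the optimal weights from an error covariance matrix; the covariance values are invented for illustration.

```python
import numpy as np

def combination_weights(sigma):
    """Min-variance combination weights for forecasts with error covariance
    sigma: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1); weights sum to one."""
    inv = np.linalg.inv(sigma)
    ones = np.ones(sigma.shape[0])
    w = inv @ ones
    return w / (ones @ w)

sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])      # correlated model errors
w = combination_weights(sigma)
print(w.round(3), float(w @ sigma @ w))   # weights and combined variance
```

Here the combined error variance (0.875) is below that of either individual model, which is exactly the payoff the paper seeks from combining neural predictors.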
36. A compensation-based recurrent fuzzy neural network for dynamic system identification
- Author
-
Cheng-Jian Lin and Cheng-Hung Chen
- Subjects
Adaptive neuro fuzzy inference system ,Information Systems and Management ,General Computer Science ,Neuro-fuzzy ,Artificial neural network ,business.industry ,System identification ,Fuzzy control system ,Management Science and Operations Research ,Defuzzification ,Fuzzy logic ,Industrial and Manufacturing Engineering ,Control theory ,Modeling and Simulation ,Adaptive system ,Artificial intelligence ,business ,Mathematics - Abstract
In this paper, a type of compensation-based recurrent fuzzy neural network (CRFNN) for identifying dynamic systems is proposed. The proposed CRFNN uses a compensation-based fuzzy reasoning method, and has feedback connections added in the rule layer of the CRFNN. The compensation-based fuzzy reasoning method can make the fuzzy logic system more adaptive and effective, and the additional feedback connections can solve temporal problems. The CRFNN model is proven to be a universal approximator in this paper. Moreover, an online learning algorithm is proposed to automatically construct the CRFNN. The results from simulations of identifying dynamic systems have shown that the convergence speed of the proposed method is faster than the convergence speed of conventional methods and that only a small number of tuning parameters are required.
- Published
- 2006
37. A rational approach to handling fuzzy perceptions in route choice
- Author
-
Turan Arslan and C. Jotin Khisty
- Subjects
Decision support system ,Information Systems and Management ,General Computer Science ,Operations research ,Heuristic ,business.industry ,Probabilistic logic ,Analytic hierarchy process ,Inference ,Management Science and Operations Research ,Fuzzy logic ,Industrial and Manufacturing Engineering ,Modeling and Simulation ,Pairwise comparison ,Artificial intelligence ,business ,Preference (economics) ,Mathematics - Abstract
The purpose of this paper is to develop a heuristic way of handling fuzzy perceptions when explaining route choice behavior from a behavioral point of view. A hybrid model, in which route choice decision making is described in a hierarchy using concepts from fuzzy logic and the analytic hierarchy process (AHP), is proposed to allow a more adequate description of route choice behavior in transportation systems. The fuzzy ‘if-then’ rules of Teodorovic and Kikuchi [Transportation route choice model using fuzzy inference technique, First International Symposium on Uncertainty Modeling and Analysis: Fuzzy Reasoning, Probabilistic Models, and Risk Management, University of Maryland, College Park, 1990, p. 140] are adopted to represent a typical driver’s psychology, capturing essential pairwise preferences among the alternatives that a driver may consider. The AHP is then incorporated into this model to capture the psychological process underlying observable behavior and to estimate drivers’ preference allotment among the alternatives. The new procedure is applied to a real-world sample based on subjects’ stated values. Findings show that the method provides intuitively and statistically promising results.
- Published
- 2006
38. Integrating Bayesian networks and decision trees in a sequential rule-based transportation model
- Author
-
Davy Janssens, Geert Wets, Koen Vanhoof, TA Theo Arentze, Harry Timmermans, Tom Brijs, Urban Planning and Transportation, Built Environment, and Information Systems Built Environment
- Subjects
Decision support system ,Information Systems and Management ,General Computer Science ,Decision engineering ,business.industry ,Computer science ,Decision tree learning ,Decision tree ,Bayesian network ,Rule-based system ,Decision rule ,Management Science and Operations Research ,Machine learning ,computer.software_genre ,transportation ,activity-based transportation modelling ,Bayesian ,networks ,decision trees ,BNT classifier ,CHAID ,Industrial and Manufacturing Engineering ,Modeling and Simulation ,Influence diagram ,Artificial intelligence ,business ,computer ,Optimal decision ,Decision analysis - Abstract
Several activity-based transportation models are now becoming operational and are entering the stage of application for the modelling of travel demand. Some of these models use decision rules to support their decision-making instead of principles of utility maximization. Decision rules can be derived from different modelling approaches. In a previous study, it was shown that Bayesian networks outperform decision trees and are better suited to capture the complexity of the underlying decision-making. However, one of their disadvantages is that Bayesian networks are somewhat limited in terms of interpretation and efficiency when rules are derived from the network, while rules derived from decision trees in general have a simple and direct interpretation. Therefore, in this study, the idea of combining decision trees and Bayesian networks was explored in order to maintain the potential advantages of both techniques. The paper reports the findings of a methodological study conducted in the context of Albatross, a sequential rule-based model of activity scheduling behaviour. To this end, the paper can be situated within a series of previous publications by the authors on improving decision-making in Albatross. The results of this study suggest that integrated Bayesian networks and decision trees can be used for modelling the different choice facets of Albatross with better predictive power than CHAID decision trees. Another conclusion is that there are initial indications that the new way of integrating decision trees and Bayesian networks produces a decision tree that is structurally more stable.
- Published
- 2006
39. Weighted Elo rating for tennis match predictions
- Author
-
Luca De Angelis, Giovanni Angelini, Vincenzo Candila, Angelini, Giovanni, Candila, Vincenzo, and De Angelis, Luca
- Subjects
Betting strategy ,Elo rating ,Forecasting ,Tennis ,Information Systems and Management ,General Computer Science ,Computer science ,0211 other engineering and technologies ,Sample (statistics) ,02 engineering and technology ,Management Science and Operations Research ,Machine learning ,computer.software_genre ,Outcome (game theory) ,Industrial and Manufacturing Engineering ,0502 economics and business ,050210 logistics & transportation ,021103 operations research ,Point (typography) ,business.industry ,05 social sciences ,Modeling and Simulation ,Artificial intelligence ,business ,computer - Abstract
Originally applied to tennis by the data journalists of FiveThirtyEight.com, the Elo rating method estimates the strength of each player based on her/his career as well as the outcome of the last match played. Together with the regression-based, point-based and paired-comparison approaches, the Elo rating is a popular method to predict the probability of winning tennis matches. Notwithstanding its widely recognized merits in terms of ease of reproducibility and good performance, the Elo method does not completely take into account the current form of each player and their recent performances. This paper proposes a new version of the Elo rating method, labelled Weighted Elo (WElo), where the standard Elo updating is additionally weighted according to the scoreline of the players’ last match. The proposed method considers not only if a player has won (lost) a match, but also how the victory (defeat) was achieved. In the empirical application, the forecasting performance of the WElo method is evaluated and compared against the most popular forecasting methods in tennis, using a sample of over 60,000 men’s and women’s professional matches. Overall, the WElo method outperforms all these competing methods. Moreover, it provides meaningfully profitable opportunities, according to a simple betting strategy.
- Published
- 2022
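The WElo idea in entry 39 scales the standard Elo step by how decisive the winner's scoreline was. A minimal sketch, assuming the weight is the winner's fraction of games won; the paper derives its weight from the scoreline of the last match, so the exact functional form here is an assumption:

```python
def expected_score(r_a, r_b):
    """Standard Elo expected winning probability of player A against B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def welo_update(r_a, r_b, a_won, frac_games_won_by_winner, k=32.0):
    """Weighted Elo update: the usual K*(S - E) step is additionally
    scaled by the winner's share of games won (assumed weight), so a
    6-0, 6-0 rout moves ratings more than a tight three-setter."""
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    w = frac_games_won_by_winner        # in (0.5, 1]: closer matches count less
    delta = k * w * (s_a - e_a)
    return r_a + delta, r_b - delta
```

Note that total rating points are conserved, as in standard Elo; only the step size responds to the scoreline.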
40. Applying machine learning for the anticipation of complex nesting solutions in hierarchical production planning
- Author
-
Christian Gahm, Chantal Ganschinietz, Axel Tuma, Aykut Uzunoglu, and Stefan Wahl
- Subjects
Information Systems and Management ,General Computer Science ,Computer science ,Feature vector ,0211 other engineering and technologies ,Scheduling (production processes) ,02 engineering and technology ,Management Science and Operations Research ,Machine learning ,computer.software_genre ,Industrial and Manufacturing Engineering ,0502 economics and business ,050210 logistics & transportation ,021103 operations research ,Artificial neural network ,Job shop scheduling ,business.industry ,05 social sciences ,Production planning ,Anticipation (artificial intelligence) ,Modeling and Simulation ,Nesting (computing) ,Artificial intelligence ,ddc:004 ,Heuristics ,business ,computer - Abstract
In hierarchical production planning, the consideration of interdependencies between superior top-level decisions and subordinate base-level decisions is essential. In this respect, the anticipation of base-level reactions is highly recommended. In this paper, we consider an example from the metal-processing industry: a serial-batch scheduling problem constitutes the top-level problem and a complex nesting problem constitutes the base-level problem. The top-level scheduling decision includes a batching decision, i.e., the determination of a set of small items to be cut out of a large slide. Thus, to evaluate the feasibility of a batch, the base-level nesting problem must be solved. Because solving nesting problems is time-consuming even when applying heuristics, it is troublesome to solve the nesting problem multiple times while solving the top-level scheduling problem. Instead, we propose an approximate anticipation of base-level reactions by machine learning to approximate batch feasibility. To that end, we present a prediction framework to identify the most promising machine learning method for the prediction (regression) task. For applying these methods, we propose new feature vectors describing the characteristics of complex nesting problem instances. For training, validation, and testing, we present a new instance generation procedure that uses a set of 6,000 convex, concave, and complex shapes to generate 88,200 nesting instances. The testing results show that an artificial neural network achieves the lowest expected loss (root mean squared error). Depending on further assumptions, we can report that the approximate anticipation based on machine learning predictions leads to an appropriate batch feasibility decision for 98.8% of the nesting instances.
- Published
- 2022
41. Machine learning at the service of meta-heuristics for solving combinatorial optimization problems: A state-of-the-art
- Author
-
El-Ghazali Talbi, Mehrdad Mohammadi, Patrick Meyer, Maryam Karimi-Mamaghan, Amir Mohammad Karimi-Mamaghan, Optimisation de grande taille et calcul large échelle [BONUS], Equipe DECIDE (Lab-STICC_DECIDE), Laboratoire des sciences et techniques de l'information, de la communication et de la connaissance (Lab-STICC), Institut Mines-Télécom [Paris] (IMT)-IMT Atlantique Bretagne-Pays de la Loire (IMT Atlantique), Institut Mines-Télécom [Paris] (IMT)-École Nationale d'Ingénieurs de Brest (ENIB)-École Nationale Supérieure de Techniques Avancées Bretagne (ENSTA Bretagne)-Université de Bretagne Sud (UBS)-Université de Brest (UBO)-Centre National de la Recherche Scientifique (CNRS)-Université Bretagne Loire (UBL)-Institut Mines-Télécom [Paris] (IMT)-IMT Atlantique Bretagne-Pays de la Loire (IMT Atlantique), Institut Mines-Télécom [Paris] (IMT)-École Nationale d'Ingénieurs de Brest (ENIB)-École Nationale Supérieure de Techniques Avancées Bretagne (ENSTA Bretagne)-Université de Bretagne Sud (UBS)-Université de Brest (UBO)-Centre National de la Recherche Scientifique (CNRS)-Université Bretagne Loire (UBL), Département Logique des Usages, Sciences sociales et Sciences de l'Information (IMT Atlantique - LUSSI), IMT Atlantique Bretagne-Pays de la Loire (IMT Atlantique), Institut Mines-Télécom [Paris] (IMT)-Institut Mines-Télécom [Paris] (IMT), University of Tehran, Université de Lille - UFR des Humanités (Lille UFRH), Université de Lille, Optimisation de grande taille et calcul large échelle (BONUS), Inria Lille - Nord Europe, Institut National de Recherche en Informatique et en Automatique (Inria)-Institut National de Recherche en Informatique et en Automatique (Inria)-Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 (CRIStAL), Centrale Lille-Université de Lille-Centre National de la Recherche Scientifique (CNRS)-Centrale Lille-Université de Lille-Centre National de la Recherche Scientifique (CNRS), Centre de Recherche en Informatique, Signal et 
Automatique de Lille - UMR 9189 (CRIStAL), Centrale Lille-Université de Lille-Centre National de la Recherche Scientifique (CNRS), Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 [CRIStAL], École Nationale d'Ingénieurs de Brest (ENIB)-Université de Bretagne Sud (UBS)-Université de Brest (UBO)-École Nationale Supérieure de Techniques Avancées Bretagne (ENSTA Bretagne)-Institut Mines-Télécom [Paris] (IMT)-Centre National de la Recherche Scientifique (CNRS)-Université Bretagne Loire (UBL)-IMT Atlantique (IMT Atlantique), Institut Mines-Télécom [Paris] (IMT)-École Nationale d'Ingénieurs de Brest (ENIB)-Université de Bretagne Sud (UBS)-Université de Brest (UBO)-École Nationale Supérieure de Techniques Avancées Bretagne (ENSTA Bretagne)-Institut Mines-Télécom [Paris] (IMT)-Centre National de la Recherche Scientifique (CNRS)-Université Bretagne Loire (UBL)-IMT Atlantique (IMT Atlantique), Institut Mines-Télécom [Paris] (IMT), IMT Atlantique (IMT Atlantique), and Université de Lille - Faculté des Humanités (Lille Humanités)
- Subjects
Service (systems architecture) ,Information Systems and Management ,General Computer Science ,Computer science ,media_common.quotation_subject ,0211 other engineering and technologies ,Initialization ,02 engineering and technology ,Management Science and Operations Research ,Machine learning ,computer.software_genre ,Industrial and Manufacturing Engineering ,[INFO.INFO-AI]Computer Science [cs]/Artificial Intelligence [cs.AI] ,Robustness (computer science) ,Taxonomy (general) ,0502 economics and business ,[INFO]Computer Science [cs] ,Quality (business) ,Metaheuristic ,ComputingMilieux_MISCELLANEOUS ,media_common ,050210 logistics & transportation ,021103 operations research ,business.industry ,05 social sciences ,[INFO.INFO-RO]Computer Science [cs]/Operations Research [cs.RO] ,Modeling and Simulation ,Key (cryptography) ,State (computer science) ,Artificial intelligence ,business ,computer - Abstract
International audience; Mixed Integer Linear Programs (MILPs) are usually NP-hard mathematical programming problems, which makes it difficult to obtain optimal solutions in a reasonable time for large-scale models. Nowadays, metaheuristics are one of the potential tools for solving this type of problem in any context. In this paper, we focus our attention on MILPs in the specific framework of Data Envelopment Analysis (DEA), where the determination of a technical-efficiency score for a set of Decision Making Units (DMUs) is one of the main objectives. In particular, we propose a new hyper-matheuristic grounded on a MILP-based decomposition in which the optimization problem is divided into two hierarchical subproblems. The new approach decomposes the model into discrete and continuous variables, treating each subproblem through different optimization methods: metaheuristics are used for the discrete variables, whereas exact methods are used for the continuous variables. The metaheuristics use an indirect representation that encodes an incomplete solution for the problem, and the exact method is applied to decode it and generate a complete solution. The experimental results, based on simulated data in the context of Data Envelopment Analysis, show that the solutions obtained through the new approach outperform those found by solving the problem globally using a metaheuristic method. Finally, regarding the new hyper-matheuristic scheme, the best algorithm selection is found for a set of cooperative metaheuristics and exact optimization algorithms.
- Published
- 2022
42. MOP/GP models for machine learning
- Author
-
Yeboon Yun, Min Yoon, Takeshi Asada, and Hirotaka Nakayama
- Subjects
Information Systems and Management ,General Computer Science ,Linear programming ,business.industry ,Feature vector ,Management Science and Operations Research ,Machine learning ,computer.software_genre ,Industrial and Manufacturing Engineering ,Inductive programming ,Support vector machine ,Hyperplane ,Margin (machine learning) ,Modeling and Simulation ,Goal programming ,Quadratic programming ,Artificial intelligence ,business ,computer ,Mathematics - Abstract
Techniques for machine learning have been extensively studied in recent years as effective tools in data mining. Although there have been several approaches to machine learning, we focus on the mathematical programming (in particular, multi-objective and goal programming; MOP/GP) approaches in this paper. Among them, the Support Vector Machine (SVM) has recently gained much popularity. In pattern classification problems with two class sets, its idea is to find a maximal-margin separating hyperplane, which gives the greatest separation between the classes in a high-dimensional feature space. This task is performed by solving a quadratic programming problem in the traditional formulation, and can be reduced to solving a linear program in another formulation. However, the idea of maximal margin separation is not quite new: in the 1960s the multi-surface method (MSM) was suggested by Mangasarian, and in the 1980s linear classifiers using goal programming were developed extensively. This paper presents an overview of how effectively MOP/GP techniques can be applied to machine learning methods such as SVM, and discusses their problems.
- Published
- 2005
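The goal-programming view in entry 42 minimizes the total deviation from the separation goals y_i(w·x_i + b) ≥ 1. A sketch of that objective, solved here by subgradient descent rather than the LP/QP formulations the paper surveys; the data and step sizes are illustrative:

```python
import numpy as np

def gp_linear_classifier(X, y, lr=0.1, epochs=200):
    """Minimize the sum of goal deviations d_i = max(0, 1 - y_i(w.x_i + b)),
    the objective of the goal-programming formulation, approximately by
    subgradient descent instead of an LP solver."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1.0                   # points with positive deviation
        if not viol.any():
            break                              # all goals met: hyperplane separates
        w += lr * (y[viol, None] * X[viol]).sum(axis=0) / n
        b += lr * y[viol].sum() / n
    return w, b

# Two linearly separable classes in the plane (labels +1 / -1)
X = np.array([[2.0, 2.0], [3.0, 3.0], [2.5, 3.5],
              [-2.0, -2.0], [-3.0, -1.0], [-2.5, -3.0]])
y = np.array([1, 1, 1, -1, -1, -1])
w, b = gp_linear_classifier(X, y)
```

When the classes are separable, the deviations all reach zero and the loop stops early; with overlapping classes the method settles on a hyperplane with minimal total deviation, which is exactly the 1980s goal-programming formulation's intent.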
43. An intelligent agent model
- Author
-
Ralf Schleiffer
- Subjects
Decision support system ,Information Systems and Management ,General Computer Science ,business.industry ,Computer science ,Multi-agent system ,Autonomous agent ,Management Science and Operations Research ,computer.software_genre ,ComputingMethodologies_ARTIFICIALINTELLIGENCE ,Industrial and Manufacturing Engineering ,Embodied agent ,Intelligent agent ,Human–computer interaction ,Modeling and Simulation ,Reinforcement learning ,Artificial intelligence ,Architecture ,Agent architecture ,business ,computer - Abstract
This paper discusses fundamental issues of intelligent agents. Based on a portrayal of agent characteristics, a general agent architecture linking aspects of perception, interpretation of natural language, learning, and decision-making is provided. Agents built upon this architecture are equipped to handle unknown, open, and distributed environments. The paper concludes with a discussion of whether or not agents designed in accordance with this architecture exhibit some sort of intelligence.
- Published
- 2005
44. Optimal software testing in the setting of controlled Markov chains
- Author
-
Wei-Yi Ning, Yong-Chao Li, and Kai-Yuan Cai
- Subjects
Test strategy ,Model-based testing ,Mathematical optimization ,Information Systems and Management ,General Computer Science ,business.industry ,Computer science ,White-box testing ,Risk-based testing ,Random testing ,Software performance testing ,Management Science and Operations Research ,Industrial and Manufacturing Engineering ,Software ,Modeling and Simulation ,Non-regression testing ,Test Management Approach ,Software reliability testing ,Artificial intelligence ,business ,Orthogonal array testing ,Test data ,Dynamic testing - Abstract
The controlled Markov chains (CMC) approach to software testing treats software testing as a control problem: the software under test serves as a controlled object modeled as a controlled Markov chain, and the software testing strategy serves as the corresponding controller. In this paper we extend the CMC approach to the case where the number of tests that can be applied to the software under test is limited. The optimal testing strategy is derived if the true values of all the software parameters of concern are known a priori. An adaptive testing strategy is employed if the true values of the software parameters of concern are not known a priori and need to be estimated on-line during software testing from testing data. A random testing strategy ignores all the related information (true values or estimates) of the software parameters of concern and follows a uniform probability distribution to select a possible test case. Simulation results show that the performance of an adaptive testing strategy cannot match that of the optimal testing strategy, but is better than that of a random testing strategy. This paper further justifies the idea of software cybernetics, which aims to explore the interplay between software theory/engineering and control theory/engineering.
- Published
- 2005
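Entry 44 compares three controllers for the same limited testing budget: optimal (true failure rates known), adaptive (rates estimated online from test outcomes), and random. A small partition-testing simulation under assumed Bernoulli failure rates; the partition model and the Laplace estimator are illustrative, not the paper's CMC formulation:

```python
import random

def run_strategy(true_p, budget, pick, rng):
    """Apply `budget` tests; `pick` chooses the partition to test next.
    Returns the number of failures revealed, which the tester maximizes."""
    tests = [0] * len(true_p)
    fails = [0] * len(true_p)
    total = 0
    for _ in range(budget):
        j = pick(tests, fails)
        tests[j] += 1
        if rng.random() < true_p[j]:
            fails[j] += 1
            total += 1
    return total

def optimal(tests, fails, true_p):
    """Always test the partition with the highest true failure rate."""
    return max(range(len(true_p)), key=lambda j: true_p[j])

def adaptive(tests, fails):
    """Estimate each partition's failure rate online (Laplace estimate)
    and test the currently most failure-prone partition."""
    est = [(f + 1) / (t + 2) for t, f in zip(tests, fails)]
    return max(range(len(est)), key=lambda j: est[j])

def randomized(tests, fails, rng):
    """Uniform random choice, ignoring all observed information."""
    return rng.randrange(len(tests))

# one replication: adaptive testing of three partitions with 300 tests
rng = random.Random(0)
found = run_strategy([0.3, 0.01, 0.01], 300, adaptive, rng)
```

Averaged over replications, the detected-failure counts order as optimal ≥ adaptive ≥ random, mirroring the ranking reported in the abstract.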
45. Selecting IS personnel use fuzzy GDSS based on metric distance method
- Author
-
Ching-Hsue Cheng and Ling-Show Chen
- Subjects
Decision support system ,Information Systems and Management ,General Computer Science ,business.industry ,Management Science and Operations Research ,Machine learning ,computer.software_genre ,Defuzzification ,Fuzzy logic ,Industrial and Manufacturing Engineering ,Fuzzy number ranking ,Coincident ,Modeling and Simulation ,Ranking SVM ,Fuzzy number ,Fuzzy set operations ,Artificial intelligence ,Data mining ,business ,computer ,Mathematics - Abstract
In this paper we propose a new approach to ranking fuzzy numbers by metric distance. To show that our method is a good ranking method, we give two examples comparing it with other methods. The paper also develops a computer-based group decision support system, FMCGDSS, to increase recruiting productivity and to easily compare our method with other fuzzy number ranking methods. The FMCGDSS includes three ranking methods: intuition ranking, Lee and Li's fuzzy mean/spread, and our metric distance method, to help managers make better decisions under fuzzy circumstances. The results indicate that the new method coincides with the intuition ranking and with Lee and Li's fuzzy mean/spread method for each type of weight.
- Published
- 2005
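Entry 45 ranks fuzzy numbers by a metric distance. A sketch of the general idea, assuming triangular fuzzy numbers, an L2 metric integrated over alpha-cuts, and the crisp maximum as the reference point; the paper's exact metric and reference differ in detail:

```python
def alpha_cut(tri, a):
    """Alpha-cut [L, U] of a triangular fuzzy number (l, m, r)."""
    l, m, r = tri
    return l + a * (m - l), r - a * (r - m)

def metric_distance(A, B, steps=100):
    """L2 metric between two triangular fuzzy numbers, numerically
    integrated over alpha-cuts (an assumed, commonly used fuzzy metric)."""
    s = 0.0
    for i in range(steps):
        a = (i + 0.5) / steps
        al, au = alpha_cut(A, a)
        bl, bu = alpha_cut(B, a)
        s += (al - bl) ** 2 + (au - bu) ** 2
    return (s / steps) ** 0.5

def rank_by_metric_distance(numbers):
    """Rank fuzzy numbers best-first: the smaller the distance to the
    common crisp maximum, the better the alternative."""
    ref_r = max(n[2] for n in numbers)
    ref = (ref_r, ref_r, ref_r)          # crisp maximum as reference point
    return sorted(numbers, key=lambda n: metric_distance(n, ref))
```

On e.g. (1, 2, 3), (2, 3, 4), (3, 4, 5) this ordering agrees with the intuitive ranking, which is the kind of coincidence-with-intuition check the abstract reports.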
46. A comparison of discrete and continuous neural network approaches to solve the class/teacher timetabling problem
- Author
-
Margarida Vaz Pato and Marco Paulo Carrasco
- Subjects
Class (computer programming) ,Mathematical optimization ,Information Systems and Management ,General Computer Science ,Artificial neural network ,business.industry ,Computer science ,Computer Science::Neural and Evolutionary Computation ,Function (mathematics) ,Management Science and Operations Research ,Industrial and Manufacturing Engineering ,Software ,Modeling and Simulation ,Genetic algorithm ,Artificial intelligence ,business ,Heuristics ,Stochastic neural network ,Metaheuristic - Abstract
This study explores the application of neural network-based heuristics to the class/teacher timetabling problem (CTTP). The paper begins by presenting the problem characteristics in terms of hard and soft constraints and proposing a formulation for the energy function required to map the issue within the artificial neural network model. There follow two distinct approaches to simulating neural network evolution. The first uses a Potts mean-field annealing simulation based on continuous Potts neurons, which has obtained favorable results in various combinatorial optimization problems. Afterwards, a discrete neural network simulation, with discrete winner-takes-all neurons, is proposed. The paper concludes with a comparison of the computational results taken from the application of both heuristics to hard hypothetical and real CTTP instances. This experiment demonstrates that the discrete approach performs better, in terms of solution quality as well as execution time. By extending the comparison, the neural discrete solutions are also compared with those obtained from a multiobjective genetic algorithm, which is already being successfully used for this problem within a timetabling software application.
- Published
- 2004
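The discrete heuristic in entry 46 lets each event's row of winner-takes-all neurons fire only in the timeslot of lowest local energy (conflict count). A small sketch on hard constraints only; the event list, slot count, and sweep scheme are illustrative assumptions:

```python
import random

def conflicts(slot_of, events, e, s):
    """Hard-constraint violations if event e is placed in slot s: another
    event in s sharing e's teacher or e's class group."""
    t, c = events[e]
    return sum(1 for o, (ot, oc) in enumerate(events)
               if o != e and slot_of[o] == s and (ot == t or oc == c))

def wta_timetable(events, n_slots, sweeps=50, seed=0):
    """Discrete winner-takes-all relaxation: each event moves to the slot
    of lowest local energy, sweeping until no move improves."""
    rng = random.Random(seed)
    slot_of = [rng.randrange(n_slots) for _ in events]
    for _ in range(sweeps):
        changed = False
        for e in range(len(events)):
            best = min(range(n_slots), key=lambda s: conflicts(slot_of, events, e, s))
            if conflicts(slot_of, events, e, best) < conflicts(slot_of, events, e, slot_of[e]):
                slot_of[e] = best
                changed = True
        if not changed:
            break
    return slot_of

# (teacher, class_group) pairs; 3 slots suffice for a conflict-free timetable
events = [("T1", "C1"), ("T1", "C2"), ("T2", "C1"), ("T2", "C2"), ("T3", "C3")]
table = wta_timetable(events, n_slots=3)
```

On this instance the sweeps settle at a zero-energy (conflict-free) assignment; on hard instances the paper's comparison is precisely about how reliably and quickly such discrete updates reach low energy versus mean-field annealing.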
47. Data mining with genetic algorithms on binary trees
- Author
-
Gerrit K. Janssens and Kenneth Sörensen
- Subjects
Information Systems and Management ,Binary tree ,General Computer Science ,business.industry ,Decision tree learning ,Weight-balanced tree ,Management Science and Operations Research ,Interval tree ,computer.software_genre ,Machine learning ,Industrial and Manufacturing Engineering ,Random binary tree ,Binary search tree ,Modeling and Simulation ,Ternary search tree ,Binary expression tree ,Data mining ,Artificial intelligence ,business ,computer ,Mathematics - Abstract
This paper focuses on the automatic interaction detection (AID) technique, which belongs to the class of decision tree data mining techniques. The AID technique explains the variance of a dependent variable through an exhaustive and repeated search of all possible relations between the (binary) predictor variables and the dependent variable. This search results in a tree in which non-terminal nodes represent the binary predictor variables, edges represent the possible values of these predictor variables, and terminal nodes or leaves correspond to classes of subjects. Despite being self-evident, the AID technique has its weaknesses. To overcome these drawbacks, a technique is developed that uses a genetic algorithm to find a set of diverse classification trees, all having large explanatory power. From this set of trees, the data analyst is able to choose the tree that fulfils his requirements and does not suffer from the weaknesses of the AID technique. The technique developed in this paper uses specialised genetic operators devised to preserve the structure of the trees and to prevent high-fitness building blocks from being destroyed. An implementation of the algorithm exists and is freely available. Some experiments were performed which show that the algorithm uses an intensification stage to find high-fitness trees, after which a diversification stage recombines high-fitness building blocks to find a set of diverse solutions.
- Published
- 2003
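Entry 47's key ingredient is crossover that keeps every offspring a syntactically valid tree. A minimal sketch with trees as nested tuples; the representation and operator details are assumptions, and the paper's operators additionally guard high-fitness material:

```python
import random

# A decision tree is either a class label (leaf, a str)
# or a node ("split", variable, left_subtree, right_subtree).

def subtrees(tree, path=()):
    """Enumerate (path, subtree) pairs; a path is a sequence of child slots."""
    yield path, tree
    if isinstance(tree, tuple):
        yield from subtrees(tree[2], path + (2,))
        yield from subtrees(tree[3], path + (3,))

def replace(tree, path, new):
    """Return a copy of `tree` with the subtree at `path` swapped for `new`."""
    if not path:
        return new
    t = list(tree)
    t[path[0]] = replace(t[path[0]], path[1:], new)
    return tuple(t)

def crossover(a, b, rng):
    """Structure-preserving crossover: graft one randomly chosen subtree of
    each parent into the other; both children remain valid trees."""
    pa, sa = rng.choice(list(subtrees(a)))
    pb, sb = rng.choice(list(subtrees(b)))
    return replace(a, pa, sb), replace(b, pb, sa)

rng = random.Random(7)
parent_a = ("split", "x1", "L0", ("split", "x2", "L0", "L1"))
parent_b = ("split", "x3", ("split", "x1", "L1", "L0"), "L1")
child_a, child_b = crossover(parent_a, parent_b, rng)
```

Because grafting replaces a whole subtree with a whole subtree, validity is preserved by construction, which is the structural property the paper's specialised operators are built around.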
48. Comparison of fuzzy and crisp systems via system dynamics simulation
- Author
-
Seçkin Polat and Cafer Erhan Bozdaǧ
- Subjects
Information Systems and Management ,General Computer Science ,Neuro-fuzzy ,business.industry ,Fuzzy set ,Management Science and Operations Research ,Type-2 fuzzy sets and systems ,Fuzzy logic ,Defuzzification ,Industrial and Manufacturing Engineering ,Modeling and Simulation ,Fuzzy number ,Fuzzy set operations ,Fuzzy associative matrix ,Artificial intelligence ,business ,Mathematics - Abstract
This paper compares fuzzy and classical (crisp) decision rules. Its hypothesis is that whether one of these rule types is superior depends on the situation. For the comparison the paper uses system dynamics (SD), which models the behavior of systems that include human beings. The comparison is made for a simple heating system controlled by a human operator. Under various changes of external and internal parameters, the results show that the major differences between fuzzy and crisp systems emerge at extreme values of these parameters. In conclusion, the superiority of crisp or fuzzy rules in a decision-making environment depends on the situation.
- Published
- 2002
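Entry 48's comparison can be reproduced in miniature: the same first-order heating model driven once by a crisp on/off rule and once by a graded fuzzy rule. The model constants and membership function below are illustrative assumptions:

```python
def clamp(x, lo=0.0, hi=1.0):
    return max(lo, min(hi, x))

def simulate(rule, steps=400, t0=15.0, t_out=5.0, dt=0.5):
    """First-order room model: heating power adds heat, while losses pull
    the temperature toward the outside temperature."""
    T, trace = t0, []
    for _ in range(steps):
        power = rule(T)                              # rule maps temperature -> power in [0, 1]
        T += (2.0 * power - 0.1 * (T - t_out)) * dt
        trace.append(T)
    return trace

crisp = lambda T: 1.0 if T < 20.0 else 0.0           # on/off thermostat rule
fuzzy = lambda T: clamp((22.0 - T) / 4.0)            # graded membership of "too cold"

temps_crisp = simulate(crisp)
temps_fuzzy = simulate(fuzzy)
```

Near the setpoint the crisp rule chatters around 20 degrees while the fuzzy rule settles smoothly at its equilibrium; varying the external parameter t_out to extreme values widens the gap between the two behaviors, the kind of situation-dependence the paper studies.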
49. A clone-based graphical modeler and mathematical model generator for optimal production planning in process industries
- Author
-
I. Kuban Altınel, Nijaz Bajgoric, Burak Birgören, Ali Tamer Ünal, and Murat Draman
- Subjects
Decision support system ,Information Systems and Management ,General Computer Science ,Linear programming ,business.industry ,Programming language ,Computer science ,Management Science and Operations Research ,Work in process ,computer.software_genre ,Machine learning ,Industrial and Manufacturing Engineering ,Object-oriented design ,Production planning ,Modeling and Simulation ,Clone (computing) ,Artificial intelligence ,Graphical model ,business ,computer ,Generator (mathematics) - Abstract
This paper outlines a visually interactive graphical modeling approach for process-type production systems, with hidden generation of complex optimization models for production planning. The proposed system lets users build a graphical model of the production system from one-to-one clones of its production units through an interactive visual interface, accepts production-specific data for its components, and finally generates and solves the corresponding mathematical programming model internally, without any interaction from the user. This “clone-based” modeling approach allows the continued use of optimization models with minimal mathematical programming understanding, as generation of the mathematical model by the clones is hidden and automatic, and therefore maintenance-free: updating the graphical production system model is enough to renew the internal optimization model. The concept is demonstrated in this paper with a linear programming prototype developed for a petroleum refinery.
- Published
- 2002
50. A comparative study of the effect of the position of outliers on classical and nontraditional approaches to the two-group classification problem
- Author
-
Robert Pavur
- Subjects
Multivariate statistics ,Information Systems and Management ,General Computer Science ,business.industry ,Management Science and Operations Research ,Machine learning ,computer.software_genre ,Industrial and Manufacturing Engineering ,k-nearest neighbors algorithm ,Statistical classification ,ComputingMethodologies_PATTERNRECOGNITION ,Discriminant ,Position (vector) ,Modeling and Simulation ,Parametric model ,Outlier ,Artificial intelligence ,business ,computer ,Parametric statistics ,Mathematics - Abstract
The popularity of nontraditional approaches to the statistical classification problem has resulted from their potential to outperform standard parametric procedures when nonnormality is present. Proponents of these nontraditional models have thus recommended them when outliers are in the data. However, it has not been fully illustrated that these nontraditional models' performance can vary widely depending on where the outlier data are located. The research in this paper demonstrates how the mathematical programming approaches and the nearest neighbor discriminant models can be affected by the position of contaminated normal data, and that each of the models studied may not be robust to all types of outliers. The results are also important because the study compares two recently proposed mathematical programming models, as well as two versions of the nearest neighbor model, with the standard classical parametric models. This combination of classification models does not appear to have been studied before under conditions of contaminated normal data in which numerous positions of the outliers are considered.
- Published
- 2002
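Entry 50's point, that a model's performance depends on where an outlier sits, can be seen with two toy two-group classifiers: a parametric nearest-centroid rule and a nonparametric 1-NN rule. The data and outlier position below are illustrative:

```python
import math

def nearest_mean_classify(train, labels, x):
    """Parametric baseline: assign x to the class with the nearer centroid."""
    classes = sorted(set(labels))
    def centroid(c):
        pts = [p for p, l in zip(train, labels) if l == c]
        return [sum(v) / len(pts) for v in zip(*pts)]
    return min(classes, key=lambda c: math.dist(centroid(c), x))

def one_nn_classify(train, labels, x):
    """Nonparametric 1-nearest-neighbour rule."""
    i = min(range(len(train)), key=lambda j: math.dist(train[j], x))
    return labels[i]

# Two well-separated groups...
A = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
B = [(8.0, 8.0), (9.0, 8.0), (8.0, 9.0), (9.0, 9.0)]
# ...plus one contaminated A-observation sitting inside B's region
outlier = (8.5, 8.5)
train = A + B + [outlier]
labels = ["A"] * 4 + ["B"] * 4 + ["A"]
```

A single contaminated A-observation inside B's region hijacks 1-NN predictions near it, while the centroid rule, whose class mean shifts only slightly, still answers B; an outlier placed far from both groups would instead distort the centroid and leave 1-NN untouched, illustrating why robustness depends on the outlier's position.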