28 results for "Saber Saati"
Search Results
2. Centralized resource allocation to create New Most Productive Scale Size (MPSS) DMUs
- Author
-
Kamyar Nojoumi, Saber Saati, and Leila Khoshandam
- Subjects
Management Science and Operations Research, Computer Science Applications, Theoretical Computer Science
- Abstract
Data envelopment analysis (DEA) is a mathematical programming-based technique for evaluating the performance of a homogeneous group of decision-making units (DMUs) with multiple inputs and outputs. One DEA application involves aggregating input resources and reallocating them to create efficient DMUs. The present study employs the centralized resource allocation (CRA) approach to develop a model for creating new DMUs. These new DMUs attain the most productive scale size (MPSS), and all of them lie on a strong supporting hyperplane. This hyperplane is obtained by solving a dual model, which generates a common set of weights. It is then shown that all new DMUs lie on this strong supporting hyperplane and that an MPSS facet is the intersection of the hyperplane with the production possibility set (PPS).
- Published
- 2022
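The CRA model above builds on standard DEA machinery: one small linear program per DMU. As background only (this is the classic input-oriented CCR envelopment model, not the authors' CRA model; the data and variable names are illustrative assumptions), it can be sketched with `scipy`:

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: rows = DMUs, columns = inputs / outputs (illustrative only).
X = np.array([[2.0], [4.0], [8.0]])   # inputs
Y = np.array([[2.0], [4.0], [4.0]])   # outputs
n, m = X.shape
s = Y.shape[1]

def ccr_efficiency(o):
    """Input-oriented CCR envelopment model for DMU o (constant returns)."""
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
    c = np.zeros(n + 1)
    c[0] = 1.0
    A_ub, b_ub = [], []
    for i in range(m):   # sum_j lambda_j * x_ij <= theta * x_io
        A_ub.append(np.concatenate(([-X[o, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(s):   # sum_j lambda_j * y_rj >= y_ro
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun

effs = [ccr_efficiency(o) for o in range(n)]
```

With this toy data the third DMU, which produces the same output as the second from twice the input, scores 0.5 while the other two are efficient.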
3. R-number Cognitive Map Method for Modeling Problems in Uncertainty and Risky Environment
- Author
-
Mostafa Izadi, Rassoul Noorossana, Hamidreza Izadbakhsh, Saber Saati, and Mohammad Khalilzadeh
- Subjects
Computational Theory and Mathematics, Artificial Intelligence, Software, Theoretical Computer Science
- Published
- 2022
4. Mathematical Model and Artificial Intelligence for Diagnosis of Alzheimer's Disease
- Author
-
Afsaneh Davodabadi, Behrooz Daneshian, Saber Saati, and Shabnam Razavyan
- Abstract
Alzheimer's disease (AD) may be characterized by degeneration of the neurological system linked to cognitive deficits, impairments in activities of daily living, and behavioral disturbances. Late-life AD research focuses on methods for the early detection of dementia. To tailor care to each patient, we used visual cues to assess their condition, outlining two approaches to diagnosing a patient's mental state. The first technique is the support vector machine (SVM). In this method, image characteristics are extracted using a fractal model for classification, and the histogram of an image is modeled with a Gaussian distribution. Classification was performed with several SVM kernels, and the outcomes were compared. The second approach proposes a deep convolutional neural network (DCNN) architecture to identify Alzheimer's-related mental disorders. According to the findings, the SVM approach correctly recognized over 93% of the images tested. During model training, the DCNN approach was 100% accurate, whereas the SVM approach achieved only 93% accuracy. On the test data, the DCNN model was accurate 98.8% of the time, compared with 89.3% for the SVM. Based on these findings, the proposed DCNN architecture may be used for diagnostic purposes involving the patient's mental state.
- Published
- 2023
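The MRI data and fractal/histogram features from the paper are not reproducible here. Purely as an illustration of the kernel-comparison step described in the abstract (synthetic data, an arbitrary train/test split, and an arbitrary kernel list are all assumptions), an SVM comparison might look like:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic two-class data standing in for extracted image features.
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Fit one SVM per kernel and compare held-out accuracy.
scores = {}
for kernel in ("linear", "rbf", "poly"):
    clf = SVC(kernel=kernel).fit(X_tr, y_tr)
    scores[kernel] = clf.score(X_te, y_te)
```

The per-kernel accuracies in `scores` play the role of the comparison table in the paper.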
5. Microblogs recommendations based on implicit similarity in content social networks
- Author
-
Elham Mazinan, Hassan Naderi, Saber Saati, and Mitra Mirzarezaee
- Subjects
Information retrieval, Computer science, Microblogging, Analytic hierarchy process, Telecommunications network, Social relation, Theoretical Computer Science, Hardware and Architecture, Similarity (psychology), User group, Social media, Software, Information Systems
- Abstract
With the development of online social networking applications, microblogs have become a necessary online communication network in daily life. Users are interested in obtaining personalized recommendations related to their tastes and needs. In some microblog systems, tags are not available, or the use of tags is rare. In addition, user-specified social relations are extremely rare. Hence, sparsity is a problem in microblog systems. To address this problem, we propose a new framework called Pblog to alleviate sparsity. Pblog identifies users’ interests via their microblogs and social relations and computes implicit similarity among users using a new algorithm. The experimental results indicated that the use of this algorithm can improve the results. In online social networks, such as Twitter, the number of microblogs in the system is high, and it is constantly increasing. Therefore, providing personalized recommendations to target users requires considerable time. To address this problem, the Pblog framework groups similar users using the analytic hierarchy process (AHP) method. Then, Pblog prunes microblogs of the target user group and recommends microblogs with higher ratings to the target user. In the experimental results section, the Pblog framework was compared with several other frameworks. All of these frameworks were run on two datasets: Twitter and Tumblr. Based on the results of these comparisons, the Pblog framework provides more appropriate recommendations to the target user than previous frameworks.
- Published
- 2021
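The abstract does not specify Pblog's similarity algorithm. Purely as a generic illustration of computing "implicit similarity" from content alone (the users and term-frequency vectors below are made up), one can compare users' vocabulary with cosine similarity:

```python
import numpy as np

# Term-frequency vectors over a shared vocabulary for three
# hypothetical users' microblog content.
users = {
    "u1": np.array([3.0, 0.0, 1.0, 2.0]),
    "u2": np.array([2.0, 0.0, 1.0, 3.0]),
    "u3": np.array([0.0, 4.0, 0.0, 1.0]),
}

def cosine(a, b):
    """Cosine similarity between two non-zero vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# u1 and u2 share most vocabulary; u3 mostly does not.
sim_12 = cosine(users["u1"], users["u2"])
sim_13 = cosine(users["u1"], users["u3"])
```

A recommender in this spirit would group users whose similarity exceeds a threshold before ranking their microblogs.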
6. Measuring congestion in sustainable supply chain based on data envelopment analysis
- Author
-
Reza Farzipoor Saen, Amin Mostafaee, Maryam Shadab, Mohammad Khoveyni, and Saber Saati
- Subjects
Supply chain management, Sustainable supply chain, Supply chain, Network structure, Context (language use), Environmental economics, Key issues, Intermediate product, Artificial Intelligence, Data envelopment analysis, Business, Software
- Abstract
Sustainable supply chain management (SSCM) integrates environmental, social, and economic concerns into supply chain management activities, with emphasis on managers' efforts to reduce negative social and environmental impacts. Evaluating sustainable supply chain performance and efficiency is a significant topic for many researchers. The presence of input and intermediate-product congestion is one of the key issues that lowers efficiency and performance in a sustainable supply chain; determining congestion is therefore of prime importance, and removing it improves performance. One of the most suitable methods for detecting congestion is data envelopment analysis (DEA). Some studies have detected intermediate-product congestion by solving network DEA (NDEA) models without considering the role of intermediate products. In this study, a sustainable supply chain with a two-stage structure is considered, and congestion status is determined, for the first time, according to the role of intermediate products. To this end, the scenarios in which congestion can occur in intermediate products are identified; for each scenario, the dominant-cone definition is extended to the network structure and NDEA models are proposed. Finally, 20 Iranian sustainable supply chains of resin manufacturing companies are used to demonstrate the applicability of the proposed models.
- Published
- 2021
7. Data envelopment analysis with fuzzy complex numbers with an empirical case on power plants of Iran
- Author
-
Mahmood Esfandiari and Saber Saati
- Subjects
Set (abstract data type), Mathematical optimization, Rank (linear algebra), Ranking, Linear programming, Computer science, Data envelopment analysis, Fuzzy number, Management Science and Operations Research, Complex number, Data type, Computer Science Applications, Theoretical Computer Science
- Abstract
Using data envelopment analysis (DEA) in a complex environment is an idea recently proposed for measuring the relative efficiencies of a set of decision-making units (DMUs) with complex inputs and outputs. In real-world problems, input and output values sometimes appear as fuzzy complex numbers, and handling such data in DEA requires a new model. This paper proposes a DEA model with triangular fuzzy complex numbers and solves it using the concept of data size and the α-level approach, which transforms the DEA model with fuzzy complex data into a linear programming problem with crisp data. A ranking model based on the same approach is also developed to rank the efficient DMUs. The proposed method is presented for the first time by the authors, and no similar method exists in the literature. Finally, a case study on the generators of steam power plants demonstrates the applicability of the proposed methods in the power industry.
- Published
- 2021
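The triangular fuzzy complex model itself is beyond a snippet, but the "data size" idea for complex observations can be illustrated: each complex datum P + jQ is reduced to its modulus, a crisp value a standard model can consume. The generator readings below are made-up numbers, not data from the paper:

```python
import numpy as np

# Hypothetical generator outputs: active power P (real part)
# plus reactive power Q (imaginary part), in arbitrary units.
S = np.array([3 + 4j, 6 + 8j, 5 + 12j])

# "Size" of each complex datum: its modulus |P + jQ| = sqrt(P**2 + Q**2),
# i.e. the apparent power, usable as a crisp DEA input or output.
sizes = np.abs(S)
```

Each modulus here is the apparent power of the corresponding reading, so reactive power is no longer discarded.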
8. A slacks-based nonlinear DEA model with integer data: an application to departments of the Islamic Azad University, Karaj Branch in Iran
- Author
-
Zahra Moazenzadeh, Saber Saati, Reza Farzipoor Saen, Reza Kazemi Matin, and Sevan Sohraee
- Subjects
Statistics and Probability, Management of Technology and Innovation, Modeling and Simulation, Management Science and Operations Research, Statistics, Probability and Uncertainty
- Abstract
Although data envelopment analysis (DEA) assumes that inputs and outputs take non-negative real values, in some real-world cases the data are integer-valued. Rounding a fractional value to the closest integer can lead to a misleading efficiency evaluation and, in some cases, to an infeasible projection point. To date, various radial and non-radial models have been presented. This paper proposes a slacks-based non-linear model that guarantees an integer-valued reference point for all integer targets; moreover, the reference point of each target is feasible under the proposed model. Not needing to round answers to the closest integer is an advantage of this method. The results of this model are compared with those of other models, and an example is used to clarify the suggested method.
- Published
- 2022
9. How to implement knowledge management in supply chain management: Best-Worst method with D-numbers
- Author
-
Javad Navaei, Soheila Sardar, and Saber Saati
- Subjects
General Computer Science, General Engineering, General Materials Science, Electrical and Electronic Engineering
- Published
- 2023
10. Measuring performance with common weights: network DEA
- Author
-
Adel Hatami-Marbini and Saber Saati
- Subjects
Flexibility (engineering), Performance management, Operations research, Computer science, Artificial Intelligence, Data envelopment analysis, Black box, Production (economics), Performance measurement, Performance improvement, Software, Network model
- Abstract
In conventional data envelopment analysis (DEA), a production system is treated as a black box: efficiency is measured without attention to what happens inside the system. In practice, however, performance improvement often requires observing the internal structure of the producing system to find the sources of inefficiency. In addition, weight flexibility, a key property of multiplier DEA models, allows a system to disregard an assessment factor (either input or output) entirely by assigning a zero or epsilon weight to it. This paper contributes to the existing literature by proposing a common-weights DEA model for production systems that comprise a number of interrelated processes. To this end, we propose an aggregate DEA model to calculate the most favourable common weights for determining the efficiency of all production systems and their processes simultaneously. Our proposed aggregate model is not only linear, allowing producing units to be evaluated equitably on the same scale, but also able to deal with mixed network structures. Furthermore, the network system is decomposed into a series system to build a relational network DEA model that emphasises separate relatedness. This greatly reduces the computational burden for the large data volumes found in many real applications and addresses difficulties in network DEA models, including zero-valued and fluctuating weights and multiple solutions. Managerially speaking, this paper provides the top management team of a production system with an integrated framework for shaping strategic decisions about firm performance: treating the sources of inefficiency and ultimately taking corrective action over the long run.
Put differently, the proposed framework helps top managers make sound decisions in complex situations with a view to improving a firm's efficiency across all production divisions, which can be identified as a core competency leading to competitive advantage for the organisation. In the context of performance management, our study includes a simple numerical example and a case study of non-life insurance companies to demonstrate the applicability of the proposed common-weights network model.
- Published
- 2019
11. The Use of Bootstrap for Weight Control in Data Envelopment Analysis
- Author
-
Alireza Amirteimoori, Saber Saati, and Akbar Amiri
- Subjects
Statistics, Data envelopment analysis, General Social Sciences, Weight control, General Economics, Econometrics and Finance, Mathematics
- Published
- 2018
12. Personalized microblog recommendations based on trust propagation and implicit microblog similarity
- Author
-
Saber Saati, Hassan Naderi, Mitra Mirzarezaee, and Elham Mazinan
- Subjects
Information retrieval, General Computer Science, Similarity (network science), Microblogging, Computer science, Social media, Theoretical Computer Science
- Published
- 2021
13. Industrial parts change recognition model using machine vision, image processing in the framework of industrial information integration
- Author
-
Saber Saati, Majid Mirbod, Ali Rajabzadeh Ghatari, and Maryam Shoar
- Subjects
Production line, Information Systems and Management, Computer science, Machine vision, Industrial engineering, Industrial and Manufacturing Engineering, Visual inspection, Rail accident, Brake, Applied research, Quality (business), Information integration
- Abstract
This paper presents an industrial parts change-recognition model using machine vision and image processing within the framework of industrial information integration; it falls into the category of applied research. The study implements a new industrial information integration engineering (IIIE) system by combining components previously treated separately in earlier research, in order to develop the inspection of industrial parts and improve the quality and accuracy of human visual inspection; this is the innovative aspect of the work. We used machine vision to augment human vision in recognizing changes in objects, such as cracks and fractures. The study thus aims to aggregate the tools identified by other researchers for change recognition in different parts, because human vision is weak and current mechanical tools such as sensors suffer from problems including calibration, maintenance and repair, and measurement error. The proposed model can be used in two settings: first, online recognition of differences between family-made industrial parts and a standard sample on production lines; second, monitoring and controlling changes in working industrial parts such as rails, train wheels, brake discs, clutch plates, and various motor parts. In the first case, every manufacturer needs a standard prototype against which production-line parts are compared; if the production-line machinery drifts out of calibration for any reason, products will be made out of standard, reducing quality or increasing production costs, so online monitoring of production lines with machine vision is important for reducing production costs. In the second case, timely recognition of changes in working industrial parts can prevent accidents.
In other words, timely detection of failure can prevent accidents in addition to reducing costs. For example, timely detection of train wheel or rail wear, followed by timely repair or replacement, will prevent a rail accident, which is one application of the proposed model. Since the results of this study are compared with those of industrial parts against standard samples, the results are sufficiently valid. The study was conducted by the authors, and no organization was involved.
- Published
- 2022
14. Measuring congestion by anchor points in DEA
- Author
-
Amin Mostafaee, Reza Farzipoor Saen, Maryam Shadab, and Saber Saati
- Subjects
Mathematical optimization, Multidisciplinary, Anchor point, Computer science, Data envelopment analysis, Efficient frontier, Production function
- Abstract
Congestion is one of the most important issues in microeconomics. In general, an increase in inputs results in an increase in outputs; when it does not, congestion occurs. Congestion reduces the efficiency of decision-making units (DMUs), so its determination is highly regarded. Some studies have determined congestion by solving conventional data envelopment analysis (DEA) models in which an inefficient unit is first projected onto the BCC frontier. Sometimes, however, multiple optimal projections are obtained, and previous models encounter problems. In this paper, based on the S-shape of the production function and the geometric features of anchor points, we develop an algorithm that connects anchor points to the definition of congestion. The algorithm requires neither efficiency values nor projections of inefficient DMUs onto the BCC efficiency frontier: by determining the anchor point with the largest output, comparing inefficient units against it, and solving conventional DEA models, congested DMUs and their congestion status are identified and the congestion amounts are calculated with simpler computations. Finally, the proposed algorithm is illustrated with examples and the results are compared to those of existing methods.
- Published
- 2020
15. A novel Data Envelopment Analysis model with complex numbers: measuring the efficiency of electric generators in steam power plants
- Author
-
Saber Saati, Mehrzad Navabakhsh, Kaveh Khalili-Damghani, and Mahmood Esfandiari
- Subjects
Statistics and Probability, Computer science, Electric generator, Management Science and Operations Research, Management of Technology and Innovation, Modeling and Simulation, Data envelopment analysis, Statistics, Probability and Uncertainty, Process engineering, Steam power, Complex number
- Abstract
The output of a generator in a power plant is electricity, which consists of two parts: active and reactive power. These quantities are expressed as complex numbers in which the real part is the active power and the imaginary part is the reactive power. Reactive power plays an important role in an electricity network, and ignoring it discards a great deal of information. Given the importance of generators in power plants, calculating the efficiency of these units is of great importance. Data envelopment analysis (DEA) is a nonparametric approach for measuring the relative efficiency of decision-making units (DMUs). Since generator data are complex numbers, using classical DEA models to measure generator efficiency cannot account for reactive power, limiting the measurement to the real part of the electric power. In this paper, a new DEA model with complex numbers is developed to assess the performance of power plant generators.
- Published
- 2019
16. Data envelopment analysis in service quality evaluation: an empirical study
- Author
-
Seyedvahid Najafi, Saber Saati, and Madjid Tavana
- Subjects
Perceived service quality index, Service (business), Service quality, Measure (data warehouse), Operations research, Computer science, SERVQUAL model, Benchmarking, Industrial and Manufacturing Engineering, Loyalty business model, SERVQUAL, Empirical research, Data envelopment analysis, Slack-based measure, Operations management
- Abstract
Service quality is often conceptualized as the comparison between service expectations and actual performance perceptions. It enhances customer satisfaction, decreases customer defection, and promotes customer loyalty. A substantial literature has examined the concept of service quality, its dimensions, and its measurement methods. We introduce the perceived service quality index (PSQI) as a single measure for evaluating the multiple-item service quality construct based on the SERVQUAL model. A slack-based measure (SBM) of efficiency with constant inputs is used to calculate the PSQI. In addition, a non-linear programming model based on the SBM is proposed to delineate an improvement guideline and improve service quality. An empirical study is conducted to assess the applicability of the proposed method. Many prior studies have used DEA as a benchmarking tool to measure service quality, but those models do not propose a coherent performance evaluation construct and consequently fail to deliver guidelines for improving service quality. The DEA models proposed in this study are designed to evaluate and improve service quality within a comprehensive framework and without any dependency on external data.
- Published
- 2014
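For reference, the slack-based measure the PSQI builds on is usually written in Tone's standard form; this is the generic SBM, not the paper's constant-input specialisation, and symbols follow the usual DEA convention (m inputs, s outputs, DMU o under evaluation):

```latex
\rho^{*} \;=\; \min_{\lambda,\, s^{-},\, s^{+}}\;
\frac{1 - \tfrac{1}{m}\sum_{i=1}^{m} s_i^{-}/x_{io}}
     {1 + \tfrac{1}{s}\sum_{r=1}^{s} s_r^{+}/y_{ro}}
\quad \text{s.t.} \quad
x_o = X\lambda + s^{-},\qquad
y_o = Y\lambda - s^{+},\qquad
\lambda \ge 0,\; s^{-} \ge 0,\; s^{+} \ge 0 .
```

The numerator penalises input slacks and the denominator output slacks, so the score lies in (0, 1] and equals 1 only for units with no slack at all.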
17. A FUZZY DATA ENVELOPMENT ANALYSIS FOR CLUSTERING OPERATING UNITS WITH IMPRECISE DATA
- Author
-
Per Joakim Agrell, Adel Hatami-Marbini, Madjid Tavana, and Saber Saati
- Subjects
Computer science, Fuzzy set, Fuzzy input and output data, Ambiguity, Fuzzy logic, Clustering, Ranking, Artificial Intelligence, Control and Systems Engineering, Data envelopment analysis, Cluster (physics), Production (economics), Priority, Data mining, Cluster analysis, Florida Border Patrol, Software, Information Systems
- Abstract
Data envelopment analysis (DEA) is a non-parametric method for measuring the efficiency of peer operating units that employ multiple inputs to produce multiple outputs. Several DEA methods have been proposed for clustering operating units. However, to the best of our knowledge, the existing methods in the literature do not simultaneously consider the priority between the clusters (classes) and the priority between the operating units in each cluster. Moreover, while crisp input and output data are indispensable in traditional DEA, real-world production processes may involve imprecise or ambiguous input and output data. Fuzzy set theory has been widely used to formalize and represent the impreciseness and ambiguity inherent in human decision-making. In this paper, we propose a new fuzzy DEA method for clustering operating units in a fuzzy environment by considering the priority between the clusters and the priority between the operating units in each cluster simultaneously. A numerical example and a case study for the Jet Ski purchasing decision by the Florida Border Patrol are presented to illustrate the efficacy and the applicability of the proposed method.
- Published
- 2013
18. How do customers evaluate hotel service quality? An empirical study in Tehran hotels
- Author
-
Mohammad Kazem Bighami, Seyedvahid Najafi, Saber Saati, and Farshid Abdi
- Subjects
Service (business), Service quality, Knowledge management, Quality management, SERVQUAL model, Hotel industry, General Business, Management and Accounting, SERVQUAL, Empirical research, Scale (social sciences), Quality (business), Marketing, Structural equation, Psychology, Reliability (statistics)
- Abstract
The purpose of this study is to investigate the dimensions of hotel service quality, to assess their relative importance, and to evaluate the service quality of Tehran hotels from guests' perspectives. The paper examines the reliability and validity of a scale designed on the basis of the SERVQUAL model. A cross-sectional study based on SERVQUAL was conducted in nine hotels in Tehran (n = 1080). Several statistical analyses, including EFA, CFA, linear regression, and t-tests, were applied to the data. Five service quality dimensions were identified and named "tangibles", "problem solving", "service supply", "empathy", and "security". Although our findings confirmed a five-dimensional SERVQUAL construct, some dimensions differ from the original SERVQUAL dimensions. Findings showed that the best overall predictor of service quality is "tangibles", followed by "service supply", "problem solving", "assurance", and "empathy".
- Published
- 2013
19. A common set of weight approach using an ideal decision making unit in data envelopment analysis
- Author
-
Per Joakim Agrell, Madjid Tavana, Saber Saati, and Adel Hatami-Marbini
- Subjects
Mathematical optimization, Control and Optimization, Ideal (set theory), Linear programming, Process (engineering), Applied Mathematics, Strategy and Management, Small number, Efficiency, Utility regulation, Measure (mathematics), Set (abstract data type), Ideal DMU, Dimension (vector space), Data envelopment analysis, Common set of weights, Business and International Management, Mathematics
- Abstract
Data envelopment analysis (DEA) is a common non-parametric frontier analysis method. The multiplier framework of DEA allows flexibility in the selection of endogenous input and output weights of decision making units (DMUs) so as to measure their efficiency cautiously. Calculating DEA scores requires solving one linear program per DMU and generates an individual set of endogenous weights (multipliers) for each performance dimension. Given the large number of DMUs in real applications, the computational and conceptual complexities are considerable, with weights that are potentially zero-valued or incommensurable across units. In this paper, we propose a two-phase algorithm to address these two problems. First, we define an ideal DMU (IDMU), a hypothetical DMU consuming the least inputs to secure the most outputs. Second, we use the IDMU in an LP model with a small number of constraints to determine a common set of weights (CSW). Finally, we calculate the efficiency of the DMUs with the obtained CSW. The proposed model is applied to a numerical example and to a case study using panel data from 286 Danish district heating plants to illustrate the applicability of the method.
- Published
- 2012
20. Data Envelopment Analysis with Fuzzy Parameters
- Author
-
Saber Saati, Madjid Tavana, and Adel Hatami-Marbini
- Subjects
Engineering, Fuzzy data, Information Systems and Management, Linear programming, Computer Networks and Communications, Process (engineering), Fuzzy logic, Computer Science Applications, Management Information Systems, Set (abstract data type), Computational Theory and Mathematics, Ranking, Data envelopment analysis, Fuzzy set operations, Data mining, Yager index, Information Systems
- Abstract
Data envelopment analysis (DEA) is a methodology for measuring the relative efficiencies of a set of decision making units (DMUs) that use multiple inputs to produce multiple outputs. In conventional DEA, all data take the form of specific numerical values, but the observed input and output values in real-life problems are sometimes imprecise or vague. Previous methods have not considered the preferences of the decision makers (DMs) in the evaluation process. This paper proposes an interactive evaluation process for measuring the relative efficiencies of a set of DMUs in fuzzy DEA with consideration of the DMs' preferences. The authors construct a linear programming (LP) model with fuzzy parameters and calculate the fuzzy efficiency of the DMUs for different α-levels. The DM then identifies his or her most preferred fuzzy goal for each DMU under consideration, and a modified Yager index is used to develop a ranking order of the DMUs. This approach allows DMs to apply their preferences or value judgments when evaluating the performance of the DMUs.
- Published
- 2011
21. An ideal-seeking fuzzy data envelopment analysis framework
- Author
-
Adel Hatami-Marbini, Saber Saati, and Madjid Tavana
- Subjects
Mathematical optimization, Ideal (set theory), Rank (computer programming), Fuzzy logic, Fuzzy mathematical programming, Euclidean distance, Set (abstract data type), Efficiency, Theory of displaced ideal, Data envelopment analysis, Nadir, Data mining, Software, Mathematics
- Abstract
Data envelopment analysis (DEA) is a widely used mathematical programming approach for evaluating the relative efficiency of decision making units (DMUs) in organizations. Crisp input and output data are fundamentally indispensable in traditional DEA evaluation process. However, the input and output data in real-world problems are often imprecise or ambiguous. In this study, we present a four-phase fuzzy DEA framework based on the theory of displaced ideal. Two hypothetical DMUs called the ideal and nadir DMUs are constructed and used as reference points to evaluate a set of information technology (IT) investment strategies based on their Euclidean distance from these reference points. The best relative efficiency of the fuzzy ideal DMU and the worst relative efficiency of the fuzzy nadir DMU are determined and combined to rank the DMUs. A numerical example is presented to demonstrate the applicability of the proposed framework and exhibit the efficacy of the procedures and algorithms.
- Published
- 2010
22. Iranian railway efficiency (1971-2004): An application of DEA
- Author
-
A. R. Vahidi, Saber Saati, and Mohammad Mehdi Movahedi
- Subjects
Operations research, Data envelopment analysis, Mathematics
- Abstract
In this paper, the efficiency of the Iranian railway is evaluated using the data envelopment analysis (DEA) method. Railway activities from 1971 to 2004 are considered, and the efficiency of each year is calculated and compared with the other years. Ten of the years are found to be efficient: the years after 1995 are efficient, except for the year 2000, which is notable from the viewpoint of management and decision making.
- Published
- 2007
23. Reducing weight flexibility in fuzzy DEA
- Author
-
Saber Saati and A. Memariani
- Subjects
Set (abstract data type) ,Flexibility (engineering) ,Computational Mathematics ,Applied Mathematics ,Statistics ,Fuzzy number ,Fuzzy data envelopment analysis ,Decision maker ,Upper and lower bounds ,Fuzzy logic ,Outcome (probability) ,Mathematics - Abstract
An important outcome of assessing relative efficiencies within a group of decision-making units (DMUs) in fuzzy data envelopment analysis is a set of virtual multipliers or weights accorded to each (input or output) factor taken into account. These sets of weights are typically different for each of the participating DMUs, and in some cases it may be considered unacceptable that the same factor is accorded widely differing weights. Thus, it is important to find a common set of weights (CSW) across the set of DMUs. In this paper, a CSW is determined by assessing upper bounds on the factor weights and compacting the resulting intervals. Since the efficiencies obtained with the proposed CSW are fuzzy numbers rather than crisp values, they are more informative for the decision maker.
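One crisp way to obtain the upper bound on each factor weight, as a sketch of the first step described above, is to maximise each weight in turn subject to every DMU having efficiency at most one. The normalisation `sum(v) + sum(u) = 1` is our assumption, and the paper's interval-compacting step is not reproduced here:

```python
import numpy as np
from scipy.optimize import linprog

def weight_upper_bounds(X, Y):
    """Upper bound on each factor weight: maximise one weight at a time
    subject to u.y_j <= v.x_j for every DMU j, weights >= 0, and the
    (assumed) normalisation sum(v) + sum(u) = 1."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n, m = X.shape
    s = Y.shape[1]
    A_ub = np.hstack([-X, Y])                   # u.y_j - v.x_j <= 0
    A_eq = np.ones((1, m + s))                  # normalisation
    ub = []
    for i in range(m + s):
        c = np.zeros(m + s)
        c[i] = -1.0                             # maximise weight i
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                      A_eq=A_eq, b_eq=[1.0], bounds=[(0, None)] * (m + s))
        ub.append(-res.fun)
    return ub                                   # order: [v_1..v_m, u_1..u_s]

ub = weight_upper_bounds([[2], [4], [3]], [[2], [4], [2]])
```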
- Published
- 2005
24. [Untitled]
- Author
-
Saber Saati, Gholam Reza Jahanshahloo, and A. Memariani
- Subjects
Mathematical optimization ,Fuzzy classification ,Artificial Intelligence ,Logic ,Fuzzy mathematics ,Fuzzy set ,Fuzzy number ,Fuzzy set operations ,Fuzzy associative matrix ,Defuzzification ,Fuzzy logic ,Software ,Mathematics - Abstract
In this paper, a fuzzy version of the CCR model (Charnes, Cooper and Rhodes (1978)) with asymmetrical triangular fuzzy numbers is presented, and a procedure is suggested for its solution. The basic idea is to transform the fuzzy CCR model into a crisp linear programming problem by applying an alternative α-cut approach, thereby converting the problem into an interval program. In this method, instead of comparing the equality (or inequality) of two intervals, a variable is defined within the interval that not only satisfies the set of constraints but also maximizes the efficiency value. We also propose a ranking method for fuzzy DMUs using the presented fuzzy DEA approach. To demonstrate the concept, numerical examples are solved and the solutions are compared with those of Guo and Tanaka (2001).
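The α-cut transformation at the heart of this approach maps a triangular fuzzy number to the interval of values with membership of at least α; a minimal sketch:

```python
def alpha_cut(l, m, r, alpha):
    """alpha-cut of a (possibly asymmetric) triangular fuzzy number
    (l, m, r) with peak m: the interval of values whose membership
    degree is at least alpha, for alpha in [0, 1]."""
    return (l + alpha * (m - l), r - alpha * (r - m))

lo, hi = alpha_cut(2, 3, 5, 0.5)   # half-level cut of (2, 3, 5)
```

In the transformed model, each fuzzy input or output is replaced by a decision variable constrained to lie in this interval, which is what turns the fuzzy CCR model into the crisp LP described above.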
- Published
- 2002
25. A fuzzy linear programming model with fuzzy parameters and decision variables
- Author
-
Saber Saati, Adel Hatami-Marbini, Madjid Tavana, and Elham Hajiakhondi
- Subjects
Mathematical optimization ,Information Systems and Management ,Fuzzy classification ,Duality ,Neuro-fuzzy ,FLP ,Strategy and Management ,Fuzzy set ,Fuzzy linear programming ,Trapezoidal fuzzy numbers ,Type-2 fuzzy sets and systems ,Defuzzification ,Fuzzy logic ,Complementary slackness theory ,Computer Science Applications ,Management of Technology and Innovation ,Fuzzy set operations ,Fuzzy number ,Fuzzy set theory ,Algorithm ,Mathematics - Abstract
Linear programming (LP) is an optimisation technique most widely used for optimal allocation of limited resources amongst competing activities. Precise data are fundamentally indispensable in standard LP problems. However, the observed values of the data in real-world problems are often imprecise or vague. Fuzzy set theory has been extensively used to represent ambiguous, uncertain or imprecise data in LP by formalising the inaccuracies inherent in human decision-making. We propose a new method for solving fuzzy LP (FLP) problems in which the right-hand side parameters and the decision variables are represented by fuzzy numbers. A new fuzzy ranking model and a new supplementary variable are utilised in the proposed FLP method to obtain the fuzzy and crisp optimal solutions by solving one LP model. Moreover, we introduce an alternative model with deterministic variables and parameters derived from the proposed FLP model. Interestingly, the result of the alternative model is identical to the crisp solution of the proposed FLP model. We use a numerical example from the FLP literature for comparison purposes and to demonstrate the applicability of the proposed method and exhibit the efficacy of the procedure.
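A generic component-wise scheme along these lines can be sketched as follows: each triangular fuzzy decision variable is split into its (l, m, r) components, constraints are imposed per component against the fuzzy right-hand side, ordering l <= m <= r is enforced, and the objective maximises the common ranking value (l + 2m + r)/4. This is an illustrative variant under stated assumptions, not the paper's exact ranking model or supplementary variable:

```python
import numpy as np
from scipy.optimize import linprog

def fuzzy_lp_max(c, A, b_tri):
    """Maximise c.x for triangular fuzzy x = (xl, xm, xr) subject to
    A@x <= b with triangular fuzzy b = (bl, bm, br), applied
    component-wise; the crisp objective uses the ranking value
    (xl + 2*xm + xr)/4. Generic illustrative scheme only."""
    A = np.asarray(A, float)
    c = np.asarray(c, float)
    m, n = A.shape
    bl, bm, br = (np.asarray(v, float) for v in b_tri)
    Z, I, Zn = np.zeros((m, n)), np.eye(n), np.zeros((n, n))
    A_ub = np.vstack([
        np.hstack([A, Z, Z]),      # A @ xl <= bl
        np.hstack([Z, A, Z]),      # A @ xm <= bm
        np.hstack([Z, Z, A]),      # A @ xr <= br
        np.hstack([I, -I, Zn]),    # xl <= xm
        np.hstack([Zn, I, -I]),    # xm <= xr
    ])
    b_ub = np.r_[bl, bm, br, np.zeros(2 * n)]
    obj = -np.r_[c / 4, c / 2, c / 4]          # maximise ranking value
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (3 * n))
    return res.x[:n], res.x[n:2 * n], res.x[2 * n:]

# maximise x subject to x <= b, with fuzzy b = (4, 5, 7)
xl, xm, xr = fuzzy_lp_max([1.0], [[1.0]], ([4.0], [5.0], [7.0]))
```

The result is itself a triangular fuzzy number; defuzzifying it with the same ranking value gives the crisp solution.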
- Published
- 2015
26. Efficiency measurement in fuzzy additive data envelopment analysis
- Author
-
Saber Saati, Adel Hatami-Marbini, Ali Emrouznejad, and Madjid Tavana
- Subjects
Uncertain data ,Fuzzy sets theory ,Comparability ,computer.software_genre ,Fuzzy logic ,Industrial and Manufacturing Engineering ,Fuzzy additive model ,Control and Systems Engineering ,Data envelopment analysis ,Fuzzy number ,Fuzzy set operations ,Data mining ,computer ,Mathematics ,Intuition - Abstract
Performance evaluation in conventional data envelopment analysis (DEA) requires crisp numerical values. However, the observed values of the input and output data in real-world problems are often imprecise or vague. These imprecise and vague data can be represented by linguistic terms characterised by fuzzy numbers in DEA to reflect the decision-makers' intuition and subjective judgements. This paper extends the conventional DEA models to a fuzzy framework by proposing a new fuzzy additive DEA model for evaluating the efficiency of a set of decision-making units (DMUs) with fuzzy inputs and outputs. The contribution of this paper is threefold: (1) we consider ambiguous, uncertain and imprecise input and output data in DEA, (2) we propose a new fuzzy additive DEA model derived from the α-level approach and (3) we demonstrate the practical aspects of our model with two numerical examples and show its comparability with five different fuzzy DEA methods in the literature. Copyright © 2011 Inderscience Enterprises Ltd.
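The crisp core of the additive model being fuzzified here can be sketched as follows; a DMU is efficient exactly when its maximal total slack is zero. Replacing the data with α-level intervals, as the paper does, is not shown:

```python
import numpy as np
from scipy.optimize import linprog

def additive_slacks(X, Y, k):
    """Crisp (CRS) additive DEA model for DMU k: maximise the total
    input and output slacks subject to X.T@lam + s_minus = X[k] and
    Y.T@lam - s_plus = Y[k], with lam, s >= 0. DMU k is efficient
    iff the optimal slack sum is zero."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n, m = X.shape
    s = Y.shape[1]
    # variables: [lam (n), s_minus (m), s_plus (s)]
    c = -np.r_[np.zeros(n), np.ones(m + s)]     # maximise total slack
    A_eq = np.vstack([
        np.hstack([X.T, np.eye(m), np.zeros((m, s))]),
        np.hstack([Y.T, np.zeros((s, m)), -np.eye(s)]),
    ])
    b_eq = np.r_[X[k], Y[k]]
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (n + m + s))
    return -res.fun                             # 0 => efficient

X = [[2], [4], [3]]
Y = [[2], [4], [2]]
slacks = [additive_slacks(X, Y, k) for k in range(3)]
```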
- Published
- 2012
27. Improving the computational complexity and weights dispersion in fuzzy DEA
- Author
-
Saber Saati and Adel Hatami-Marbini
- Subjects
Mathematical optimization ,Fuzzy classification ,Computational complexity theory ,Data envelopment analysis ,Fuzzy sets theory ,Fuzzy set operations ,Statistical dispersion ,Common set of weights ,Fuzzy logic ,Mathematics - Abstract
One of the prominent features of standard and fuzzy data envelopment analysis (DEA) is the representation of each of the participating decision-making units (DMUs) in the best possible light. This causes two problems: first, a different set of factor weights for each DMU, with a large number of zeros; and second, a large number of linear programming models to solve. In this paper, we propose an efficient method to address both problems. In the proposed method, a common set of weights (CSW) is obtained in fuzzy DEA by solving just one linear program. Since the efficiencies resulting from the proposed CSW are interval numbers rather than crisp values, they are more informative for the decision maker. The proposed model is applied to a numerical example to demonstrate the concept.
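A crisp goal-programming variant of the one-LP idea can be sketched as follows: a single weight vector is chosen to minimise the total deviation of all DMUs from efficiency one. This is a standard CSW formulation used for illustration, not the paper's fuzzy interval model:

```python
import numpy as np
from scipy.optimize import linprog

def common_weights(X, Y, eps=1e-3):
    """One-LP common set of weights (crisp goal-programming variant):
    choose (v, u) minimising sum_j (v.x_j - u.y_j) subject to
    u.y_j <= v.x_j for every DMU j and all weights >= eps."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n, m = X.shape
    s = Y.shape[1]
    # variables: [v (m), u (s)]; objective = total deviation
    c = np.r_[X.sum(axis=0), -Y.sum(axis=0)]
    A_ub = np.hstack([-X, Y])                   # u.y_j - v.x_j <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  bounds=[(eps, None)] * (m + s))
    v, u = res.x[:m], res.x[m:]
    return (Y @ u) / (X @ v)                    # CSW efficiency per DMU

effs = common_weights([[2], [4], [3]], [[2], [4], [2]])
```

All DMUs are scored with the same weights, so the resulting efficiencies are directly comparable across units.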
28. Simulation-based multi-criteria evaluation of cost-risk-effectiveness in prognostic maintenance operations: A case study from railway industry
- Author
-
Saber Saati, Fateme Dinmohammadi, Ashraf Labib, Babakalli Alkali, Mahmood Shafiee, Baraldi, Piero, Di Maio, Francesco, and Zio, Enrico
- Subjects
Risk analysis (engineering) ,Computer science ,Order (business) ,Transport network ,TOPSIS ,Train ,Profitability index ,TJ ,Asset (computer security) ,TA403 ,Preventive maintenance ,TF ,Reliability (statistics) - Abstract
Maintenance of physical assets plays a key role in a company's long-term profitability and has increasingly become an important element of business performance in the last decades. However, many organizations are under pressure to reduce the costs associated with maintenance operations while keeping a high level of safety and reliability. Currently, the performance of maintenance programs across industries varies from corrective repair or replacement (through which assets are restored to working condition in case of a failure) to prognostic programs (with the aim of detecting anomalies as early as possible to prevent a failure). To achieve the aim of reducing cost and increasing efficiency, asset owners must determine which maintenance program should be adopted for each group of assets and also know when, where and how the maintenance procedures must be conducted. Simulation models are among the most powerful techniques for assessing the efficiency and effectiveness of maintenance programs. A simulation model can take into account system parameter uncertainties as well as explore the feasibility of novel maintenance strategies. This paper presents a simulation-based multi-criteria approach to assess the effectiveness of various maintenance programs adopted for industrial plants and public facilities (e.g. preventive maintenance (PM), risk-based maintenance (RBM), condition-based maintenance (CBM), etc.) to save cost and reduce risk. The maintenance program with the best "performance" is chosen for execution by means of the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). A case study of the passenger door systems of the Class 380 trains on Scotland's railway transport network is provided to demonstrate how to use the model. Based on the results, CBM was found to be superior to the other maintenance programs for the train passenger door units, followed by mileage-based PM.
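The TOPSIS step can be sketched in its standard form: alternatives (maintenance programs) scored on cost, risk and effectiveness criteria are ranked by relative closeness to the ideal solution. The criteria, weights and scores below are hypothetical, not the case-study data:

```python
import numpy as np

def topsis(M, weights, benefit):
    """Standard TOPSIS: rank alternatives (rows of M) by relative
    closeness to the ideal solution. benefit[j] is True for criteria
    to maximise (effectiveness) and False for those to minimise
    (cost, risk)."""
    M = np.asarray(M, float)
    w = np.asarray(weights, float)
    R = M / np.linalg.norm(M, axis=0)           # vector normalisation
    V = R * w                                   # weighted normalised matrix
    best = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)         # higher = closer to ideal

# hypothetical programs scored on (cost, risk, effectiveness)
scores = topsis([[100, 5, 0.6],                 # mileage-based PM
                 [120, 3, 0.8],                 # CBM
                 [90, 6, 0.5]],                 # corrective
                [0.3, 0.4, 0.3], [False, False, True])
```

With these assumed scores the CBM row ranks first, mirroring the qualitative outcome reported above.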