19 results for "Kulahci, Murat"
Search Results
2. Stream-based active learning with linear models
- Author
- Cacciarelli, Davide, Kulahci, Murat, and Tyssedal, John Sølve
- Published
- 2022
- Full Text
- View/download PDF
3. Monitoring batch processes with dynamic time warping and k-nearest neighbours
- Author
- Spooner, Max and Kulahci, Murat
- Published
- 2018
- Full Text
- View/download PDF
4. On the structure of dynamic principal component analysis used in statistical process monitoring
- Author
- Vanhatalo, Erik, Kulahci, Murat, and Bergquist, Bjarne
- Published
- 2017
- Full Text
- View/download PDF
5. Selecting local constraint for alignment of batch process data with dynamic time warping
- Author
- Spooner, Max, Kold, David, and Kulahci, Murat
- Published
- 2017
- Full Text
- View/download PDF
6. Selection of non-zero loadings in sparse principal component analysis
- Author
- Gajjar, Shriram, Kulahci, Murat, and Palazoglu, Ahmet
- Published
- 2017
- Full Text
- View/download PDF
7. Pig herd monitoring and undesirable tripping and stepping prevention
- Author
- Gronskyte, Ruta, Clemmensen, Line Harder, Hviid, Marchen Sonja, and Kulahci, Murat
- Published
- 2015
- Full Text
- View/download PDF
8. In vivo Comet assay – statistical analysis and power calculations of mice testicular cells
- Author
- Hansen, Merete Kjær, Sharma, Anoop Kumar, Dybdahl, Marianne, Boberg, Julie, and Kulahci, Murat
- Published
- 2014
- Full Text
- View/download PDF
9. Real-time fault detection and diagnosis using sparse principal component analysis.
- Author
- Gajjar, Shriram, Kulahci, Murat, and Palazoglu, Ahmet
- Subjects
- REAL-time computing; DEBUGGING; PRINCIPAL components analysis; ACQUISITION of data; ELECTRONIC data processing
- Abstract
With the emergence of smart factories, large volumes of process data are collected and stored at high sampling rates for improved energy efficiency, process monitoring and sustainability. The data collected in the course of enterprise-wide operations consists of information from broadly deployed sensors and other control equipment. Interpreting such large volumes of data with a limited workforce is becoming an increasingly common challenge. Principal component analysis (PCA) is a widely accepted procedure for summarizing data while minimizing information loss. It does so by finding new variables, the principal components (PCs), that are linear combinations of the original variables in the dataset. However, interpreting PCs obtained from many variables in a large dataset is often challenging, especially in the context of fault detection and diagnosis studies. Sparse principal component analysis (SPCA) is a relatively recent technique proposed for producing PCs with sparse loadings via a variance-sparsity trade-off. Using SPCA, some of the loadings on PCs can be restricted to zero. In this paper, we introduce a method to select the number of non-zero loadings in each PC while using SPCA. The proposed approach considerably improves the interpretability of PCs while minimizing the loss of total variance explained. Furthermore, we compare the performance of PCA- and SPCA-based techniques for fault detection and fault diagnosis. The key features of the methodology are assessed through a synthetic example and a comparative study of the benchmark Tennessee Eastman process. [ABSTRACT FROM AUTHOR] (A short illustrative code sketch follows this record.)
- Published
- 2018
- Full Text
- View/download PDF
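The record above describes selecting the number of non-zero loadings per sparse principal component while limiting the loss of explained variance. A minimal sketch of that trade-off, assuming scikit-learn is available; the L1-penalty sweep and the simple, unadjusted variance measure are illustrative choices, not the authors' selection procedure.

```python
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
X[:, 3] = X[:, 0] + 0.1 * rng.normal(size=500)   # make two variables strongly correlated
X = X - X.mean(axis=0)

# Variance explained by two ordinary (dense) principal components, for reference.
dense_var = PCA(n_components=2).fit(X).explained_variance_ratio_.sum()

# Sweep the L1 penalty, count non-zero loadings per component, and track the
# (rough, unadjusted) share of total variance captured by the sparse scores.
for alpha in (0.1, 0.5, 1.0, 2.0):
    spca = SparsePCA(n_components=2, alpha=alpha, random_state=0).fit(X)
    nonzero = np.count_nonzero(spca.components_, axis=1)
    scores = spca.transform(X)
    captured = scores.var(axis=0).sum() / X.var(axis=0).sum()
    print(f"alpha={alpha:4.1f}  non-zero loadings per PC={nonzero}  "
          f"captured variance={captured:.2f}  (dense PCA: {dense_var:.2f})")
```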
10. Statistical process control versus deep learning for power plant condition monitoring.
- Author
- Hansen, Henrik Hviid, Kulahci, Murat, and Nielsen, Bo Friis
- Subjects
- STATISTICAL process control; DEEP learning; POWER plants; PRINCIPAL components analysis
- Abstract
This study compares four models for industrial condition monitoring, including a principal components analysis (PCA) approach and three deep learning models, one of which is a new, lightweight version of another. We also propose a simple attention mechanism for enhancing deep learning models with better predictions and feature importance. Two datasets are used, one simulated from the Tennessee Eastman Process, the other from two feedwater pumps at a Danish combined heat and power plant. Our final results show evidence in favour of the PCA-based approach, as it has detection ability comparable to the deep learning approaches, as well as faster training, fewer hyperparameters, and robustness to changing operating conditions. We conclude the paper by putting into perspective the importance of building up complexity incrementally, with a recommendation to start modelling with simpler and well-tested models before adopting more advanced, less transparent models.
• A statistical model is compared with deep learning techniques for fault detection.
• The principal components model's fault detection is as good as deep learning.
• The deep learning method ARGUELite shows promise for single-regime datasets.
• A unique dataset from a power plant feedwater pump was used for model evaluation.
[ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
11. A novel fault detection and diagnosis approach based on orthogonal autoencoders.
- Author
- Cacciarelli, Davide and Kulahci, Murat
- Subjects
- FAULT diagnosis; STATISTICAL process control; CHEMICAL processes; MANUFACTURING processes; QUALITY control charts; PRINCIPAL components analysis; LATENT variables
- Abstract
• An unsupervised fault detection and diagnosis scheme based on orthogonal autoencoders is proposed for the monitoring of industrial and chemical processes.
• The use of integrated gradients allows for exploration of the bottleneck of the autoencoder, providing candidate variables for the root cause analysis.
• The proposed method performs well compared to traditional methods in offering compelling and interpretable results.
• The analysis shows how the way faults are introduced affects the detection and diagnosis performance of principal component analysis-based control charts.
In recent years, there have been studies focusing on the use of different types of autoencoders (AEs) for monitoring complex nonlinear data coming from industrial and chemical processes. However, in many cases the focus was placed on detection. As a result, practitioners are encountering problems in trying to interpret such complex models and obtaining candidate variables for root cause analysis once an alarm is raised. This paper proposes a novel statistical process control (SPC) framework based on orthogonal autoencoders (OAEs). OAEs regularize the loss function to ensure no correlation among the features of the latent variables. This is extremely beneficial in SPC tasks, as it allows for the invertibility of the covariance matrix when computing the Hotelling T² statistic, significantly improving detection and diagnosis performance when the process variables are highly correlated. To support the fault diagnosis and identification analysis, we propose an adaptation of the integrated gradients (IG) method. Numerical simulations and the benchmark Tennessee Eastman Process are used to evaluate the performance of the proposed approach by comparing it to traditional approaches such as principal component analysis (PCA) and kernel PCA (KPCA). In the analysis, we explore how the information useful for fault detection and diagnosis is stored in the intermediate layers of the encoder network. We also investigate how the correlation structure of the data affects the detection and diagnosis of faulty variables. The results show how the combination of OAEs and IG represents a compelling and ready-to-use solution, offering improved detection and diagnosis performance over the traditional methods. [ABSTRACT FROM AUTHOR] (A short illustrative code sketch follows this record.)
- Published
- 2022
- Full Text
- View/download PDF
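A toy sketch of the mechanism summarised in the record above, assuming PyTorch is installed: an autoencoder trained with an extra penalty on the off-diagonal covariance of the latent codes (so the latent features stay close to uncorrelated), followed by a Hotelling T² statistic computed on those codes. The architecture, penalty weight and data are placeholders rather than the authors' configuration, and the integrated-gradients diagnosis step is omitted.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(1000, 20)
X[:, 5] = 0.9 * X[:, 0] + 0.1 * torch.randn(1000)     # two highly correlated process variables

enc = nn.Sequential(nn.Linear(20, 8), nn.ReLU(), nn.Linear(8, 4))
dec = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 20))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

for epoch in range(200):
    z = enc(X)
    recon = dec(z)
    zc = z - z.mean(dim=0)
    cov = zc.T @ zc / (len(X) - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    # Reconstruction error plus a penalty pushing the latent covariance towards diagonal.
    loss = nn.functional.mse_loss(recon, X) + 1.0 * (off_diag ** 2).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Hotelling T^2 on the latent codes; a near-diagonal latent covariance keeps the inverse stable.
with torch.no_grad():
    z_train = enc(X)
    mu = z_train.mean(dim=0)
    S = torch.cov(z_train.T) + 1e-6 * torch.eye(4)
    S_inv = torch.linalg.inv(S)
    x_new = torch.randn(1, 20) + 3.0                   # a shifted, "faulty" observation
    d = enc(x_new) - mu
    t2 = (d @ S_inv @ d.T).item()
    print(f"T^2 of the shifted observation: {t2:.1f}")
```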
12. Split-plot designs with mirror image pairs as sub-plots
- Author
- Tyssedal, John, Kulahci, Murat, and Bisgaard, Søren
- Subjects
- MIRROR images; ORTHOGONALIZATION; GEOMETRIC analysis; STOCHASTIC processes; ANALYSIS of variance; STRATEGIC planning
- Abstract
In this article we investigate two-level split-plot designs where the sub-plots consist of only two mirror image trials. Assuming third- and higher-order interactions negligible, we show that these designs divide the estimated effects into two orthogonal sub-spaces, separating sub-plot main effects and sub-plot by whole-plot interactions from the rest. Further, we show how to construct split-plot designs of projectivity P ≥ 3. We also introduce a new class of split-plot designs with mirror image pairs constructed from non-geometric Plackett–Burman designs. The design properties of such designs are very appealing, with effects of major interest free from full aliasing, assuming that third- and higher-order interactions are negligible. [Copyright Elsevier] (A short illustrative code sketch follows this record.)
- Published
- 2011
- Full Text
- View/download PDF
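A small toy illustration of the design structure described above: each whole-plot setting of the hard-to-change factors carries two sub-plot trials that are mirror images of each other (a settings vector and its sign reversal). It only shows the mirror-image pairing and is not the paper's Plackett–Burman-based construction.

```python
from itertools import product
import numpy as np

# Whole-plot factors A, B (hard to change) and sub-plot factors C, D (easy to change),
# all at two levels coded -1/+1.
whole_plots = np.array(list(product([-1, 1], repeat=2)))
sub_plot_base = np.array(list(product([-1, 1], repeat=2)))

runs = []
for i, wp in enumerate(whole_plots):
    s = sub_plot_base[i % len(sub_plot_base)]          # one base sub-plot setting per whole plot
    for sp in (s, -s):                                 # the mirror-image pair
        runs.append(np.concatenate([wp, sp]))

design = np.array(runs)
print("run  A  B  C  D")
for k, row in enumerate(design, start=1):
    print(f"{k:3d} " + "  ".join(f"{int(v):+d}" for v in row))
```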
13. An extended Tennessee Eastman simulation dataset for fault-detection and decision support systems.
- Author
- Reinartz, Christopher, Kulahci, Murat, and Ravn, Ole
- Subjects
- DECISION support systems; QUALITY control charts; PRINCIPAL components analysis; CUSUM technique; CHEMICAL engineers; CHEMICAL engineering; DYNAMIC simulation
- Abstract
• Simulation of 28 process faults at different fault magnitudes for six operating modes.
• Simulation of the process's reaction to changes of the control setpoints.
• Healthy simulation data for configurations other than the original six process modes.
• Simulation of dynamic transitions between operating modes with different transition times.
• Performance benchmark for statistical fault-detection schemes on the presented dataset.
The Tennessee Eastman Process (TEP) is a frequently used benchmark in chemical engineering research. An extended simulator, published in 2015, enables a more in-depth investigation of TEP, featuring additional, scalable process disturbances as well as an extended list of variables. Even though the simulator has been used multiple times since its release, the lack of a standardized reference dataset impedes direct comparisons of methods. In this contribution we present an extensive reference dataset, incorporating repeat simulations of healthy and faulty process data, additional measurements and multiple magnitudes for all process disturbances. All six production modes of TEP as well as mode transitions and operating points in a region around the modes are simulated. We further perform fault detection based on principal component analysis combined with T² and Q charts, using average run length as a performance metric to provide an initial benchmark for statistical process monitoring schemes for the presented data. [ABSTRACT FROM AUTHOR] (A short illustrative code sketch follows this record.)
- Published
- 2021
- Full Text
- View/download PDF
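A minimal sketch of the kind of benchmark described above, assuming scikit-learn: PCA fitted on healthy data, monitoring with Hotelling T² and Q (squared prediction error) statistics, and the run length to detection for a simulated fault. The data, fault magnitude and empirical control limits are illustrative, not taken from the presented dataset.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
healthy = rng.normal(size=(2000, 10))                 # in-control training data
pca = PCA(n_components=3).fit(healthy)

def t2_and_q(X):
    """Hotelling T^2 on the retained scores and Q (SPE) on the residuals."""
    scores = pca.transform(X)
    t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)
    resid = X - pca.inverse_transform(scores)
    q = np.sum(resid**2, axis=1)
    return t2, q

# Empirical 99.5% control limits estimated from the healthy data.
t2_lim, q_lim = (np.percentile(s, 99.5) for s in t2_and_q(healthy))

test = rng.normal(size=(500, 10))
test[100:, 2] += 5.0                                  # step fault on one variable from sample 100
t2, q = t2_and_q(test)
alarms = np.flatnonzero((t2 > t2_lim) | (q > q_lim))
detected = alarms[alarms >= 100]
print("samples to detection:", detected[0] - 100 + 1 if detected.size else "no alarm")
```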
14. Condition monitoring of wind turbine faults: Modeling and savings.
- Author
- Hansen, Henrik Hviid, MacDougall, Neil, Jensen, Christopher Dam, Kulahci, Murat, and Nielsen, Bo Friis
- Subjects
- WIND turbines; MONITORING of machinery; TURBINE generators; STATISTICAL process control; COST structure; MOVING average process; LEAD time (Supply chain management)
- Abstract
This paper presents a case study on condition monitoring of power generators at offshore wind turbines. Two fault detection models are proposed for detecting sudden changes in the sensed value of metallic debris at the generator. The first model uses an exponentially weighted moving average, while the second monitors first-order derivatives using a fixed threshold. This is expected to improve maintenance activities by avoiding late or early part replacement. The economic impact of the proposed approach is also provided with a realistic depiction of the cost structure associated with the corresponding maintenance plan. While the specifics of the case study are supported by real-life data, considering the prevalence of generators not only in offshore wind turbines but also in other production environments, we believe the case study covered in this paper can be used as a blueprint for similar studies in other applications.
• Large-scale condition monitoring demonstrates economic benefits.
• Case study from a real wind turbine population with economic impact assessment.
• A sensor monitoring the fault mode provides lead time for maintenance planning.
• Sensitivity analysis assesses the results under downtime and power price uncertainty.
[ABSTRACT FROM AUTHOR] (A short illustrative code sketch follows this record.)
- Published
- 2024
- Full Text
- View/download PDF
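A hedged sketch of the two detectors described above: an exponentially weighted moving average (EWMA) chart on the sensor signal and a fixed threshold on its first-order differences. The signal, chart parameters and thresholds are invented for illustration and are not the values used in the case study.

```python
import numpy as np

rng = np.random.default_rng(2)
signal = rng.normal(0.0, 0.1, size=500)        # stand-in for the debris sensor reading
signal[300:] += 1.0                            # sudden change at sample 300

# Detector 1: EWMA control chart with limits from an in-control baseline period.
lam, L = 0.2, 3.0
baseline_mean, baseline_std = signal[:200].mean(), signal[:200].std()
ewma = np.empty_like(signal)
ewma[0] = baseline_mean
for t in range(1, len(signal)):
    ewma[t] = lam * signal[t] + (1 - lam) * ewma[t - 1]
limit = L * baseline_std * np.sqrt(lam / (2 - lam))
ewma_alarms = np.flatnonzero(np.abs(ewma - baseline_mean) > limit)
ewma_detect = ewma_alarms[ewma_alarms >= 300]

# Detector 2: fixed threshold on the first-order derivative (first difference).
deriv_alarms = np.flatnonzero(np.abs(np.diff(signal)) > 0.5) + 1
deriv_detect = deriv_alarms[deriv_alarms >= 300]

print("EWMA detection delay:", ewma_detect[0] - 300 if ewma_detect.size else "not detected")
print("derivative detection delay:", deriv_detect[0] - 300 if deriv_detect.size else "not detected")
```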
15. Harvest time prediction for batch processes.
- Author
- Spooner, Max, Kold, David, and Kulahci, Murat
- Subjects
- BACTERIA; FERMENTATION; REGRESSION analysis; NUMERICAL analysis; MATHEMATICAL analysis
- Abstract
Batch processes usually exhibit variation in the time at which individual batches are stopped (referred to as the harvest time). Harvest time is based on the occurrence of some criterion, and there may be great uncertainty as to when this criterion will be satisfied. This uncertainty increases the difficulty of scheduling downstream operations and results in fewer completed batches per day. A real case study of a bacterial fermentation process is presented. We consider the problem of predicting the harvest time of a batch in advance, to reduce variation and improve batch quality. Lasso regression is used to obtain an interpretable model for predicting the harvest time at an early stage in the batch. A novel method for updating the harvest time predictions as a batch progresses is presented, based on information obtained from online alignment using dynamic time warping. [ABSTRACT FROM AUTHOR] (A short illustrative code sketch follows this record.)
- Published
- 2018
- Full Text
- View/download PDF
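A minimal sketch of the prediction idea above, assuming scikit-learn: lasso regression mapping measurements taken early in a batch to the eventual harvest time, yielding a sparse, interpretable model. The data are synthetic placeholders, and the online-alignment update step based on dynamic time warping is not shown.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(3)
n_batches, n_features = 120, 30
X_early = rng.normal(size=(n_batches, n_features))          # measurements from early in each batch
harvest_time = 40 + 5 * X_early[:, 0] - 3 * X_early[:, 7] + rng.normal(0, 1, n_batches)

# Cross-validated lasso keeps only the informative features (sparse coefficients).
model = LassoCV(cv=5).fit(X_early, harvest_time)
selected = np.flatnonzero(model.coef_)
print("features with non-zero coefficients:", selected)     # should include features 0 and 7
print("predicted harvest time for a new batch:", round(float(model.predict(X_early[:1])[0]), 1))
```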
16. iCFD: Interpreted Computational Fluid Dynamics – Degeneration of CFD to one-dimensional advection-dispersion models using statistical experimental design – The secondary clarifier.
- Author
- Guyonvarch, Estelle, Ramin, Elham, Kulahci, Murat, and Plósz, Benedek Gy
- Subjects
- COMPUTATIONAL fluid dynamics; ADVECTION-diffusion equations; BOUNDARY value problems; EXPERIMENTAL design; LATIN hypercube sampling
- Abstract
The present study aims at using statistically designed computational fluid dynamics (CFD) simulations as numerical experiments for the identification of one-dimensional (1-D) advection-dispersion models – computationally light tools, used, e.g., as sub-models in systems analysis. The objective is to develop a new 1-D framework, referred to as interpreted CFD (iCFD) models, in which statistical meta-models are used to calculate the pseudo-dispersion coefficient (D) as a function of design and flow boundary conditions. The method – presented in a straightforward and transparent way – is illustrated using the example of a circular secondary settling tank (SST). First, the significant design and flow factors are screened out by applying the statistical method of two-level fractional factorial design of experiments. Second, based on the number of significant factors identified through the factor screening study and system understanding, 50 different sets of design and flow conditions are selected using Latin Hypercube Sampling (LHS). The boundary condition sets are imposed on a 2-D axi-symmetrical CFD simulation model of the SST. In the framework, to degenerate the 2-D model structure, CFD model outputs are approximated by the 1-D model through the calibration of three different model structures for D. Correlation equations for the D parameter are then identified as a function of the selected design and flow boundary conditions (meta-models), and their accuracy is evaluated against D values estimated in each numerical experiment. The evaluation and validation of the iCFD model structure is carried out using scenario simulation results obtained with parameters sampled from the corners of the LHS experimental region. For the studied SST, additional iCFD model development was carried out in terms of (i) assessing different density current sub-models; (ii) implementation of a combined flocculation, hindered, transient and compression settling velocity function; and (iii) assessment of modelling the onset of transient and compression settling. Furthermore, the optimal level of model discretization in both 2-D and 1-D was determined. Results suggest that the iCFD model developed for the SST through the proposed methodology is able to predict solid distribution with high accuracy – taking a reasonable computational effort – when compared to multi-dimensional numerical experiments, under a wide range of flow and design conditions. iCFD tools could play a crucial role in reliably predicting systems' performance under normal and shock events. [ABSTRACT FROM AUTHOR] (A short illustrative code sketch follows this record.)
- Published
- 2015
- Full Text
- View/download PDF
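A rough sketch of the statistical experimental-design workflow described above, assuming SciPy is available: a two-level factorial screening of candidate factors, Latin Hypercube Sampling over the retained factors, and a simple least-squares meta-model for the dispersion coefficient D. The surrogate response function stands in for the CFD simulations and is purely illustrative.

```python
import numpy as np
from itertools import product
from scipy.stats import qmc

def cfd_surrogate(x):
    # Placeholder for an expensive 2-D CFD run returning a dispersion-coefficient value.
    flow, depth, diameter, feed = x
    return 0.8 * flow + 0.3 * depth + 0.02 * diameter + 0.01 * feed

# Step 1: two-level factorial screening of four design/flow factors in coded units.
screen = np.array(list(product([-1.0, 1.0], repeat=4)))
y = np.array([cfd_surrogate(run) for run in screen])
effects = 2.0 * screen.T @ y / len(screen)              # main-effect estimates
keep = np.argsort(np.abs(effects))[-2:]                 # factors surviving the screening
print("main effects:", np.round(effects, 3), "-> keeping factors", np.sort(keep))

# Step 2: Latin Hypercube Sampling over the retained factors, other factors at mid-level.
sampler = qmc.LatinHypercube(d=len(keep), seed=0)
points = qmc.scale(sampler.random(50), l_bounds=[-1, -1], u_bounds=[1, 1])
D = []
for row in points:
    x = np.zeros(4)
    x[keep] = row
    D.append(cfd_surrogate(x))

# Step 3: fit a simple linear meta-model D = f(retained factors) by least squares.
A = np.column_stack([np.ones(len(points)), points])
coef, *_ = np.linalg.lstsq(A, np.array(D), rcond=None)
print("meta-model coefficients (intercept, factor terms):", np.round(coef, 3))
```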
17. Transient risk of water layer formation on PCBAs in different climates: Climate data analysis and experimental study.
- Author
- Conseil-Gudla, Helene, Spooner, Max, Kulahci, Murat, and Ambat, Rajan
- Subjects
- DEW point; RELIABILITY of electronics; DATA analysis; ELECTRONIC packaging; HUMIDITY; STRAY currents
- Abstract
The reliability of electronic devices depends on the environmental loads to which they are exposed. Climatic conditions vary greatly from one geographical location to another (from hot and humid to cold and dry areas), and the temperature and humidity vary from season to season and from day to day. High levels of temperature and relative humidity mean high water content in the air, but saturated conditions (i.e. 100 % RH) can also be reached at low temperatures. This paper analyses the relationship between temperature, dew point temperature and their difference (here called ΔT), and how the occurrence and duration of periods where the dew point is close to the temperature relate to transient condensation effects on electronics. The paper has two parts: (i) data analysis of typical climate profiles within the different zones of the Köppen–Geiger classification to pick up conditions where ΔT is very low (for example ≤0.4 °C), with various summary statistics of these events calculated in order to assess the temperature at which these events happen, their durations and their frequency; and (ii) empirical investigation of the effect of ΔT ≤ 0.4 °C on the reliability of electronics by mimicking an electronic device, for which the duration of the ΔT event is varied in one set of experiments and the ambient temperature is varied in the other. The effect of the packaging of the electronics is also studied in this section. The statistical study of the climate profiles shows that the transient events (ΔT ≤ 0.4 °C) occur in almost every location, at different temperature levels, with a duration of at least one observation (observations were hourly in the database). The experimental results show that the presence of the enclosure, cleanliness and a bigger pitch size reduce the levels of leakage current, while similarly high levels of leakage current are observed for the different durations of the transient events, indicating that these climatic transient conditions can have a big impact on electronics reliability.
• Statistical climate data analysis showed that almost all locations experience the transient risk of condensation.
• Short durations of the transient events resulted in similarly high levels of LC compared to longer durations.
• The enclosure protection buffered the humidity change experienced by the SIR PCB placed inside.
• The hygroscopic nature of the weak organic acids has the highest impact on LC and ECM formation.
[ABSTRACT FROM AUTHOR] (A short illustrative code sketch follows this record.)
- Published
- 2022
- Full Text
- View/download PDF
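A small sketch of the climate-data analysis described above: compute ΔT as air temperature minus dew point from hourly records, flag hours where ΔT ≤ 0.4 °C, and summarise how often such near-condensation events occur and how long they last. Synthetic hourly data, assuming NumPy and pandas, are used in place of a real climate database.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n_hours = 24 * 365                                     # one year of hourly observations
t = np.arange(n_hours)
temp = 10 + 8 * np.sin(2 * np.pi * t / n_hours) + rng.normal(0, 2, n_hours)
dew = temp - np.abs(rng.normal(2.0, 1.5, n_hours))     # dew point is at or below temperature

df = pd.DataFrame({"T": temp, "Td": dew})
df["dT"] = df["T"] - df["Td"]
df["event"] = df["dT"] <= 0.4                          # near-condensation hours

# Group consecutive flagged hours into events and summarise duration and temperature.
event_id = (df["event"] != df["event"].shift()).cumsum()
events = df.loc[df["event"]].groupby(event_id[df["event"]])
summary = events.agg(duration_h=("T", "size"), mean_T=("T", "mean"))
print(f"{len(summary)} near-condensation events; "
      f"median duration {summary['duration_h'].median():.0f} h; "
      f"mean temperature during events {summary['mean_T'].mean():.1f} degC")
```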
18. A taxonomy of railway track maintenance planning and scheduling: A review and research trends.
- Author
- Sedghi, Mahdieh, Kauppila, Osmo, Bergquist, Bjarne, Vanhatalo, Erik, and Kulahci, Murat
- Subjects
- TRAIN schedules; RAILROADS; SCHEDULING; TAXONOMY
- Abstract
• Developing a novel taxonomy for railway track maintenance planning and scheduling (RTMP&S) decision-making models.
• Discussing the differences between planning and scheduling problems in railway maintenance.
• Considering the structural characteristics of the railway track that can affect decision-making models.
• Reviewing the attributes of maintenance management decisions in RTMP&S decision-making.
• Summarising the optimisation frameworks for modelling RTMP&S problems and the solution approaches proposed in the literature.
• Discussing research trends to help researchers and practitioners gain a clear understanding of the state of the art of RTMP&S problems and future research directions.
Railway track maintenance and renewal are vital for railway safety, train punctuality, and travel comfort. Therefore, having cost-effective maintenance is critical in managing railway infrastructure assets. There has been a considerable amount of research performed on mathematical and decision support models for improving the application of railway track maintenance planning and scheduling. This article reviews the literature on decision support models for railway track maintenance planning and scheduling and transforms the results into a problem taxonomy. Furthermore, the article discusses current approaches to optimising maintenance planning and scheduling, research trends, and possible gaps in the related decision-making models. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
19. Cost-sensitive learning classification strategy for predicting product failures.
- Author
- Frumosu, Flavia Dalia, Khan, Abdul Rauf, Schiøler, Henrik, Kulahci, Murat, Zaki, Mohamed, and Westermann-Rasmussen, Peter
- Subjects
- PRODUCT failure; LEARNING strategies; VORONOI polygons; CONTAINER industry; INDUSTRIAL costs; FEATURE selection
- Abstract
• A modified cost-sensitive classification strategy for an industrial problem.
• A decision rule for going through or skipping elements of the last quality control stage.
• A trade-off problem between cost and quality.
• The modified strategy can also be applied to problems without an associated cost.
In the current era of Industry 4.0, sensor data used in connection with machine learning algorithms can help manufacturing industries to reduce costs and to predict failures in advance. This paper addresses a binary classification problem found in manufacturing engineering, which focuses on how to ensure product quality delivery and at the same time reduce production costs. The aim is to predict faulty products, whose number is in this case extremely low. As a result of this characteristic, the problem reduces to an imbalanced binary classification problem. The authors contribute to imbalanced classification research in three important ways. First, the industrial application, coming from the electronic manufacturing industry, is presented in detail, along with its data and modelling challenges. Second, a modified cost-sensitive classification strategy based on a combination of Voronoi diagrams and a genetic algorithm is applied to tackle this problem and is compared to several base classifiers. The results obtained are promising for this specific application. Third, in order to evaluate the flexibility of the strategy and to demonstrate its wide range of applicability, 25 real-world data sets are selected from the KEEL repository, with different imbalance ratios and numbers of features. The strategy, in this case implemented without a predefined cost, is compared with the same base classifiers as those used for the industrial problem. [ABSTRACT FROM AUTHOR] (A short illustrative code sketch follows this record.)
- Published
- 2020
- Full Text
- View/download PDF
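A simplified illustration of the cost-quality trade-off described above, assuming scikit-learn. It replaces the authors' Voronoi-diagram-plus-genetic-algorithm strategy with a plain cost-sensitive decision rule: a unit is sent through the final quality-control stage whenever the expected cost of an escaped failure exceeds the cost of testing. The data and cost figures are made up for the sketch.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.normal(size=(5000, 12))
y = (X[:, 0] + X[:, 1] ** 2 + rng.normal(0, 1, 5000) > 4.0).astype(int)   # rare failures

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=0).fit(X_tr, y_tr)
p_fail = clf.predict_proba(X_te)[:, 1]

COST_TEST, COST_ESCAPED_FAILURE = 1.0, 50.0        # illustrative production costs
send_to_test = p_fail * COST_ESCAPED_FAILURE > COST_TEST

skipped = ~send_to_test
print(f"units skipping the final test: {skipped.sum()}/{len(y_te)}, "
      f"escaped failures among them: {(y_te[skipped] == 1).sum()}")
```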