247 results for "Kulahci, Murat"
Search Results
202. Time Series Data: Examples and Basic Concepts.
- Author
- Shewhart, Walter A., Wilks, Samuel S., Bisgaard, Søren, and Kulahci, Murat
- Published
- 2011
- Full Text
- View/download PDF
203. Catalysis of Discovery and Development in Engineering and Industry
- Author
- Kulahci, Murat, primary and Box, George E. P., additional
- Published
- 2003
- Full Text
- View/download PDF
204. Performance Evaluation of Dynamic Monitoring Systems: The Waterfall Chart
- Author
- Box, George, primary, Bisgaard, Søren, additional, Graves, Spencer, additional, Kulahci, Murat, additional, Marko, Ken, additional, James, John, additional, Van Gilder, John, additional, Ting, Tom, additional, Zatorski, Hal, additional, and Wu, Cuiping, additional
- Published
- 2003
- Full Text
- View/download PDF
205. Designing simulation experiments with controllable and uncontrollable factors.
- Author
- Dehlendorff, Christian, Kulahci, Murat, and Andersen, Klaus Kaae
- Published
- 2008
206. Improving and Controlling Business Processes
- Author
- Bisgaard, Søren, primary and Kulahci, Murat, additional
- Published
- 2002
- Full Text
- View/download PDF
207. Switching-one-column follow-up experiments for Plackett-Burman designs
- Author
- Bisgaard, Søren, primary and Kulahci, Murat, additional
- Published
- 2001
- Full Text
- View/download PDF
208. Quality Quandaries: ROBUST PRODUCT DESIGN: SAVING TRIALS WITH SPLIT-PLOT CONFOUNDING
- Author
- Bisgaard, Søren, primary and Kulahci, Murat, additional
- Published
- 2001
- Full Text
- View/download PDF
209. FINDING ASSIGNABLE CAUSES
- Author
- Bisgaard, Søren, primary and Kulahci, Murat, additional
- Published
- 2000
- Full Text
- View/download PDF
210. Use of Sparse Principal Component Analysis (SPCA) for Fault Detection
- Author
- Gajjar, Shriram, Kulahci, Murat, and Palazoglu, Ahmet
- Abstract
Principal component analysis (PCA) has been widely used for data dimension reduction and process fault detection. However, interpreting the principal components and the outcomes of PCA-based monitoring techniques is a challenging task since each principal component is a linear combination of the original variables which can be numerous in most modern applications. To address this challenge, we first propose the use of sparse principal component analysis (SPCA) where the loadings of some variables in principal components are restricted to zero. This paper then describes a technique to determine the number of non-zero loadings in each principal component. Furthermore, we compare the performance of PCA and SPCA in fault detection. The validity and potential of SPCA are demonstrated through simulated data and a comparative study with the benchmark Tennessee Eastman process.
- Published
- 2016
- Full Text
- View/download PDF
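The sparse PCA idea summarized in entry 210 above can be illustrated with a small, self-contained sketch. This is not the authors' implementation or the Tennessee Eastman data: it assumes scikit-learn's SparsePCA as a stand-in for the sparse loadings, uses simulated in-control and faulty data, and monitors a simple Hotelling-type statistic on the sparse scores.

# Illustrative sketch only: sparse loadings via scikit-learn's SparsePCA on
# simulated data, with a Hotelling-type statistic on the scores for detection.
import numpy as np
from sklearn.decomposition import SparsePCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
loadings = np.zeros((2, 10))
loadings[0, :3] = 1.0                                   # factor 1 drives variables 1-3
loadings[1, 5:8] = 1.0                                  # factor 2 drives variables 6-8
X_train = rng.normal(size=(500, 2)) @ loadings + 0.3 * rng.normal(size=(500, 10))
X_fault = rng.normal(size=(100, 2)) @ loadings + 0.3 * rng.normal(size=(100, 10))
X_fault[:, 6] += 2.0                                    # fault: mean shift in variable 7

scaler = StandardScaler().fit(X_train)
spca = SparsePCA(n_components=2, alpha=1.0, random_state=0)
T_train = spca.fit_transform(scaler.transform(X_train))
T_fault = spca.transform(scaler.transform(X_fault))

S_inv = np.linalg.pinv(np.cov(T_train, rowvar=False))   # pinv guards against near-singularity

def t2(scores):
    return np.einsum("ij,jk,ik->i", scores, S_inv, scores)

limit = np.quantile(t2(T_train), 0.99)                  # empirical 99% control limit
print("zero loadings per component:", (spca.components_ == 0).sum(axis=1))
print("fraction of faulty samples flagged:", float((t2(T_fault) > limit).mean()))

The paper additionally proposes a procedure for choosing the number of non-zero loadings per component; in this sketch that is simply governed by the assumed alpha penalty.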
211. Discussion of “The Statistical Evaluation of Categorical Measurements: ‘Simple Scales, but Treacherous Complexity Underneath’”.
- Author
- Kulahci, Murat
- Subjects
- QUANTITATIVE research, MATHEMATICAL category theory, COHEN'S kappa coefficient (Statistics), DATABASES
- Abstract
The article offers the author's insights on the statistical analysis of categorical data. The author discusses the usage of the kappa statistic in agreement studies based on the comprehensive review by Professor Jeroen de Mast and colleagues. The author mentions that de Mast also presents the confusing behavior of the kappa statistic in fictitious datasets.
- Published
- 2014
- Full Text
- View/download PDF
212. Designing simulation experiments with controllable and uncontrollable factors for applications in healthcare.
- Author
- Dehlendorff, Christian, Kulahci, Murat, and Andersen, Klaus Kaae
- Subjects
- SIMULATION methods & models, BASIS (Information retrieval system), MEDICAL care, COMPUTER programming, ELECTRONIC data processing, HOSPITALS
- Abstract
We propose a new methodology for designing computer experiments that was inspired by the split-plot designs that are often used in physical experimentation. The methodology has been developed for a simulation model of a surgical unit in a Danish hospital. We classify the factors as controllable and uncontrollable on the basis of their characteristics in the physical system. The experiments are designed so that, for a given setting of the controllable factors, the various settings of the uncontrollable factors cover the design space uniformly. Moreover the methodology allows for overall uniform coverage in the combined design when all settings of the uncontrollable factors are considered at once. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
213. Steepest-Ascent Constrained Simultaneous Perturbation for Multiobjective Optimization.
- Author
- McClary, Daniel W., Syrotiuk, Violet R., and Kulahci, Murat
- Subjects
- STOCHASTIC approximation, AD hoc computer networks, MULTIPLE criteria decision making, PERTURBATION theory, CONJUGATE gradient methods
- Abstract
The simultaneous optimization of multiple responses in a dynamic system is challenging. When a response has a known gradient, it is often easily improved along the path of steepest ascent. By contrast, a stochastic approximation technique may be used when the gradient is unknown or costly to obtain. We consider the problem of optimizing multiple responses in which the gradient is known for only one response. We propose a hybrid approach for this problem, called simultaneous perturbation stochastic approximation steepest ascent, SPSA-SA or SP(SA)2 for short. SP(SA)2 is an SPSA technique that leverages information about the known gradient to constrain the perturbations used to approximate the others. We apply SP(SA)2 to the cross-layer optimization of throughput, packet loss, and end-to-end delay in a mobile ad hoc network (MANET), a self-organizing wireless network. The results show that SP(SA)2 achieves higher throughput and lower packet loss and end-to-end delay than the steepest ascent, SPSA, and the Nelder-Mead stochastic approximation approaches. It also reduces the cost in the number of iterations to perform the optimization. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
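Entry 213 above builds on simultaneous perturbation stochastic approximation (SPSA). The following is a minimal sketch of plain SPSA only, maximizing a noisy response of two factors; the steepest-ascent constraint that defines SP(SA)2 and the network application are not reproduced, and the gain constants are illustrative assumptions.

# Plain SPSA ascent on a noisy two-factor response (illustrative sketch only).
import numpy as np

rng = np.random.default_rng(1)

def noisy_response(x):
    # Unknown-gradient response with optimum at (1.0, -0.5), observed with noise.
    return -(x[0] - 1.0) ** 2 - (x[1] + 0.5) ** 2 + rng.normal(0, 0.01)

def spsa_maximize(f, theta, a=0.2, c=0.1, n_iter=300):
    for k in range(1, n_iter + 1):
        ak, ck = a / k ** 0.602, c / k ** 0.101           # standard SPSA gain sequences
        delta = rng.choice([-1.0, 1.0], size=theta.size)  # simultaneous perturbation
        grad_hat = (f(theta + ck * delta) - f(theta - ck * delta)) / (2 * ck * delta)
        theta = theta + ak * grad_hat                     # ascent step (maximization)
    return theta

print(np.round(spsa_maximize(noisy_response, np.zeros(2)), 2))   # approaches (1.0, -0.5)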
214. Profile-Driven Regression for Modeling and Runtime Optimization of Mobile Networks.
- Author
- McClary, Daniel W., Syrotiuk, Violet R., and Kulahci, Murat
- Subjects
- COMPUTER networks, REGRESSION analysis, INFORMATION technology, DATA transmission systems, NETWORK PC (Computer)
- Abstract
Computer networks often display nonlinear behavior when examined over a wide range of operating conditions. There are few strategies available for modeling such behavior and optimizing such systems as they run. Profile-driven regression is developed and applied to modeling and runtime optimization of throughput in a mobile ad hoc network, a self-organizing collection of mobile wireless nodes without any fixed infrastructure. The intermediate models generated in profile-driven regression are used to fit an overall model of throughput, and are also used to optimize controllable factors at runtime. Unlike others, the throughput model accounts for node speed. The resulting optimization is very effective; locally optimizing the network factors at runtime results in throughput as much as six times higher than that achieved with the factors at their default levels. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
215. Analysis of signal–response systems using generalized linear mixed models.
- Author
- Gupta, Shilpa, Kulahci, Murat, Montgomery, Douglas C., and Borror, Connie M.
- Subjects
- SIX Sigma, INDUSTRIAL design, QUALITY control standards, VARIANCES, MATHEMATICAL models
- Abstract
Robust parameter design is one of the important tools used in Design for Six Sigma. In this article, we present an application of the generalized linear mixed model (GLMM) approach to robust design and analysis of signal–response systems. We propose a split-plot approach to the signal–response system characterized by two variance components—within-profile variance and between-profile variance. We demonstrate that explicit modeling of variance components using GLMMs leads to more precise point estimates of important model coefficients with shorter confidence intervals. Copyright © 2009 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
216. Analysis of computer experiments with multiple noise sources.
- Author
- Dehlendorff, Christian, Kulahci, Murat, and Andersen, Klaus K.
- Subjects
- NOISE, EXPERIMENTS, MATHEMATICAL models, ORTHOPEDICS
- Abstract
In this paper we present a modeling framework for analyzing computer models with two types of variations. The paper is based on a case study of an orthopedic surgical unit, which has both controllable and uncontrollable factors. Our results show that this structure of variation can be modeled effectively with linear mixed effects models and generalized additive models. Copyright © 2009 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2010
- Full Text
- View/download PDF
217. Checking the Adequacy of Fit of Models from Split-Plot Designs.
- Author
- Almimi, Ashraf A., Kulahci, Murat, and Montgomery, Douglas C.
- Subjects
- EXPERIMENTAL mathematics, MEASUREMENT errors, PREDICTION models, NUMERICAL calculations, GRAPHICAL modeling (Statistics), GRAPHIC methods
- Abstract
One of the main features that distinguish split-plot experiments from other experiments is that they involve two types of experimental errors: the whole-plot (WP) error and the subplot (SP) error. Taking this into consideration is very important when computing measures of adequacy of fit for split-plot models. In this article, we propose the computation of two R², R²-adjusted, prediction error sums of squares (PRESS), and R²-prediction statistics to measure the adequacy of fit for the WP and the SP submodels in a split-plot design. This is complemented with the graphical analysis of the two types of errors to check for any violation of the underlying assumptions and the adequacy of fit of split-plot models. Using examples, we show how computing two measures of model adequacy of fit for each split-plot design model is appropriate and useful as they reveal whether the correct WP and SP effects have been included in the model and describe the predictive performance of each group of effects. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
218. Estimation of missing observations in two-level split-plot designs.
- Author
- Almimi, Ashraf A., Kulahci, Murat, and Montgomery, Douglas C.
- Subjects
- QUANTITATIVE research, MISSING data (Statistics), ESTIMATION theory, MULTIVARIATE analysis, MULTIPLE imputation (Statistics)
- Abstract
Inserting estimates for the missing observations from split-plot designs restores their balanced or orthogonal structure and alleviates the difficulties in the statistical analysis. In this article, we extend a method due to Draper and Stoneman to estimate the missing observations from unreplicated two-level factorial and fractional factorial split-plot (FSP and FFSP) designs. The missing observations, which can either be from the same whole plot, from different whole plots, or comprise entire whole plots, are estimated by equating to zero a number of specific contrast columns equal to the number of the missing observations. These estimates are inserted into the design table and the estimates for the remaining effects (or alias chains of effects as the case with FFSP designs) are plotted on two half-normal plots: one for the whole-plot effects and the other for the subplot effects. If the smaller effects do not point at the origin, then different contrast columns to some or all of the initial ones should be discarded and the plots re-examined for bias. Using examples, we show how the method provides estimates for the missing observations that are very close to their actual values. Copyright © 2007 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
219. Quality Quandaries: Using a Time Series Model for Process Adjustment and Control.
- Author
- Bisgaard, Søren and Kulahci, Murat
- Subjects
- MANUFACTURING processes, INDUSTRIAL engineering, PRODUCTION engineering, TIME series analysis, PROCESS control systems, QUALITY control, CHEMICAL industry
- Abstract
The article demonstrates how to characterize the behavior of a chemical manufacturing process using a time series model to control and adjust the manufacturing process. The authors discuss a different interpretation of the word control, and consider the situation where the process is adjusted whenever data coming from the process indicates that the process is too far from a given target value. They describe the idea behind time series control of a process and proceed more intuitively and use the chemical process as an illustrative example.
- Published
- 2008
- Full Text
- View/download PDF
220. Blocking Two-level Factorial Experiments.
- Author
- Kulahci, Murat
- Subjects
- EXPERIMENTAL design, FACTORIALS, INDUSTRIAL design, KRONECKER products
- Abstract
Blocking is commonly used in experimental design to eliminate unwanted variation by creating more homogeneous conditions for experimental treatments within each block. While it has been a standard practice in experimental design, blocking fractional factorials still presents many challenges due to differences between treatment and blocking variables. Lately, new design criteria such as the total number of clear effects and fractional resolution have been proposed to design blocked two-level fractional factorial experiments. This article presents a flexible matrix representation for two-level fractional factorials that will allow experimenters and software developers to block such experiments based on any design criterion that is suitable with the experimental conditions. Copyright © 2006 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
221. A generalization of the alias matrix.
- Author
- Kulahci, Murat and Bisgaard, Søren
- Subjects
- MATHEMATICAL statistics, STATISTICAL correlation, FACTORIAL experiment designs, FACTOR analysis, MATRIX derivatives, NUMBER theory, ESTIMATION theory
- Abstract
The investigation of aliases or biases is important for the interpretation of the results from factorial experiments. For two-level fractional factorials this can be facilitated through their group structure. For more general arrays the alias matrix can be used. This tool is traditionally based on the assumption that the error structure is that associated with ordinary least squares. For situations where that is not the case, we provide in this article a generalization of the alias matrix applicable under the generalized least squares assumptions. We also show that for the special case of split plot error structure, the generalized alias matrix simplifies to the ordinary alias matrix. [ABSTRACT FROM AUTHOR]
- Published
- 2006
- Full Text
- View/download PDF
222. The Use of Plackett-Burman Designs to Construct Split-Plot Designs.
- Author
- Kulahci, Murat and Bisgaard, Søren
- Subjects
- EXPERIMENTAL design, FACTOR analysis, STATISTICAL decision making, STATISTICAL hypothesis testing, STATISTICAL correlation, FACTOR structure
- Abstract
When some factors are hard to change and others are relatively easier, split-plot experiments are often an economic alternative to fully randomized designs. Split-plot experiments, with their structure of subplot arrays imbedded within whole-plot arrays, have a tendency to become large, particularly in screening situations when many factors are considered. To alleviate this problem, we explore, for the case of two-level designs, various ways to use orthogonal arrays of the Plackett-Burman type to reduce the number of individual tests. General construction principles are outlined, and the resulting alias structure is derived and discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
223. A climate classification for corrosion control in electronic system design
- Author
- Spooner, Max, Ambat, Rajan, Conseil-Gudla, Hélène, and Kulahci, Murat
- Abstract
Climate factors such as humidity and temperature have a significant impact on the corrosion reliability of electronic products. Given the huge geographical variability in climate conditions globally, a climate classification is a useful tool that simplifies the problem of considering climate when designing electronics packaging. Most current guidelines for electronic product design rely on the Köppen–Geiger classification first developed by Köppen over a century ago. Köppen devised a set of heuristics to separate climates to match different vegetation types. These climate classes are unlikely to be optimal for electronic product design. This paper presents a new climate classification using parameters important for corrosion reliability of electronics. The classification is based on real climate data measured every 3 h during a 5-year period at over 9000 locations globally. A key step is defining relevant features of climate affecting corrosion in electronics. Features related to temperature are defined, as well as the amount of time for which the difference between temperature and dew point is less than 1, 2 or 3 °C. These features relate to the risk of condensation in electronic products. The features are defined such that diurnal, seasonal and yearly variation is taken into account. The locations are then clustered using K-means clustering to obtain the relevant climate classes. This data-driven classification, based on key features for corrosion reliability of electronics, will be a useful aid for product design, reliability testing and lifetime estimation.
- Published
- 2022
- Full Text
- View/download PDF
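The feature-then-cluster idea in entry 223 above can be sketched in a few lines. Everything below is an assumption made for illustration (synthetic temperature and dew-point series, the 1/2/3 °C thresholds as the only condensation-risk features, six clusters); it is not the paper's feature set or its 9000-station dataset.

# Illustrative sketch: condensation-risk features per location, then K-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
n_sites, n_obs = 200, 5 * 365 * 8                      # ~5 years of 3-hourly records
temp = rng.normal(15.0, 10.0, size=(n_sites, n_obs))
dew_point = temp - np.abs(rng.normal(3.0, 2.0, size=(n_sites, n_obs)))

features = np.column_stack(
    [temp.mean(axis=1), temp.std(axis=1)]
    + [((temp - dew_point) < dt).mean(axis=1) for dt in (1.0, 2.0, 3.0)]
)

labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(features)
print("locations per climate class:", np.bincount(labels))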
224. Performance Evaluation of Dynamic Monitoring Systems: The Waterfall Chart.
- Author
- Box, George, Bisgaard, Søren, Graves, Spencer, Kulahci, Murat, Marko, Ken, James, John, Van Gilder, John, Ting, Tom, Zatorski, Hal, and Wu, Cuiping
- Subjects
- MANUFACTURING processes, QUALITY control, CUSUM technique, MATHEMATICAL statistics, COMPUTER systems
- Abstract
Computers are increasingly employed to monitor the performance of complex systems. An important issue is how to evaluate the performance of such monitors. In this article we introduce a three-dimensional representation that we call a “waterfall chart” of the probability of an alarm as a function of time and the condition of the system. It combines and shows the conceptual relationship between the cumulative distribution function of the run length and the power function. The value of this tool is illustrated with an application to Page's one-sided Cusum algorithm. However, it can be applied in general for any monitoring system. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
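The quantity behind the waterfall chart of entry 224 above is the probability of an alarm by time t, viewed jointly as a function of time and of the condition of the system. A minimal simulation sketch for Page's one-sided Cusum follows; the reference value k, threshold h, grid of mean shifts and the normal model are illustrative assumptions, not the article's settings.

# Illustrative sketch: run-length CDF of a one-sided Cusum over a grid of shifts,
# i.e. the surface a waterfall chart displays (time x condition x P(alarm)).
import numpy as np

def alarm_time(x, k=0.5, h=5.0):
    s = 0.0
    for t, xi in enumerate(x, start=1):
        s = max(0.0, s + xi - k)
        if s > h:
            return t
    return len(x) + 1                           # no alarm within the horizon

rng = np.random.default_rng(3)
horizon, n_rep = 100, 2000
shifts = np.linspace(0.0, 2.0, 5)               # "condition of the system"

surface = np.empty((len(shifts), horizon))
for i, d in enumerate(shifts):
    run_lengths = np.array([alarm_time(rng.normal(d, 1.0, horizon)) for _ in range(n_rep)])
    surface[i] = [(run_lengths <= t).mean() for t in range(1, horizon + 1)]

for d, row in zip(shifts, surface):
    print(f"shift={d:.1f}  P(alarm by t=20)={row[19]:.3f}  P(alarm by t=100)={row[99]:.3f}")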
225. Improving and Controlling Business Processes.
- Author
- Bisgaard, Søren and Kulahci, Murat
- Subjects
- SALES forecasting, STATISTICAL process control
- Abstract
Edited by George Box and Søren Bisgaard [ABSTRACT FROM AUTHOR]
- Published
- 2001
- Full Text
- View/download PDF
226. An extended Tennessee Eastman simulation dataset for fault-detection and decision support systems.
- Author
- Reinartz, Christopher, Kulahci, Murat, and Ravn, Ole
- Subjects
- DECISION support systems, QUALITY control charts, PRINCIPAL components analysis, CUSUM technique, CHEMICAL engineers, CHEMICAL engineering, DYNAMIC simulation
- Abstract
• Simulation of 28 process faults at different fault-magnitudes for six operating modes.
• Simulation of the process' reaction to changes of the control setpoints.
• Healthy simulation data for other configurations than the original six process modes.
• Simulation of dynamic transitions between operating modes with different transition times.
• Performance benchmark for statistical fault-detection schemes for the presented dataset.
The Tennessee Eastman Process (TEP) is a frequently used benchmark in chemical engineering research. An extended simulator, published in 2015, enables a more in-depth investigation of TEP, featuring additional, scalable process disturbances as well as an extended list of variables. Even though the simulator has been used multiple times since its release, the lack of a standardized reference dataset impedes direct comparisons of methods. In this contribution we present an extensive reference dataset, incorporating repeat simulations of healthy and faulty process data, additional measurements and multiple magnitudes for all process disturbances. All six production modes of TEP as well as mode transitions and operating points in a region around the modes are simulated. We further perform fault-detection based on principal component analysis combined with T² and Q charts using average run length as a performance metric to provide an initial benchmark for statistical process monitoring schemes for the presented data. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
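Entry 226 above benchmarks fault detection with PCA combined with T² and Q charts. The sketch below shows those two statistics on simulated data standing in for the TEP dataset; control limits are simple empirical quantiles and the run-length summary is deliberately crude, so none of this reproduces the paper's benchmark.

# Illustrative sketch: PCA-based T^2 (score space) and Q/SPE (residual space)
# statistics with empirical control limits, on simulated stand-in data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
X_normal = rng.normal(size=(1000, 12))                 # healthy operation
X_faulty = rng.normal(size=(500, 12))
X_faulty[:, 4] += 1.5                                  # a simple mean-shift fault

scaler = StandardScaler().fit(X_normal)
pca = PCA(n_components=4).fit(scaler.transform(X_normal))

def t2_and_q(X):
    Z = scaler.transform(X)
    T = pca.transform(Z)
    t2 = np.sum(T ** 2 / pca.explained_variance_, axis=1)
    q = np.sum((Z - pca.inverse_transform(T)) ** 2, axis=1)
    return t2, q

t2_ref, q_ref = t2_and_q(X_normal)
t2_lim, q_lim = np.quantile(t2_ref, 0.99), np.quantile(q_ref, 0.99)

t2_f, q_f = t2_and_q(X_faulty)
alarms = (t2_f > t2_lim) | (q_f > q_lim)
first_alarm = int(np.argmax(alarms)) + 1 if alarms.any() else None
print("detection rate:", float(alarms.mean()), "| first alarm at sample:", first_alarm)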
227. Artificial Intelligence in Pharmacoepidemiology: A Systematic Review. Part 2–Comparison of the Performance of Artificial Intelligence and Traditional Pharmacoepidemiological Techniques.
- Author
- Sessa, Maurizio, Liang, David, Khan, Abdul Rauf, Kulahci, Murat, and Andersen, Morten
- Subjects
- ARTIFICIAL intelligence, ARTIFICIAL neural networks, PHARMACOGENOMICS, DRUG side effects, PHARMACOEPIDEMIOLOGY, LENGTH of stay in hospitals, DRUG utilization
- Abstract
Aim: To summarize the evidence on the performance of artificial intelligence vs. traditional pharmacoepidemiological techniques. Methods: Ovid MEDLINE (01/1950 to 05/2019) was searched to identify observational studies, meta-analyses, and clinical trials using artificial intelligence techniques having a drug as the exposure or the outcome of the study. Only studies with an available full text in the English language were evaluated. Results: In all, 72 original articles and five reviews were identified via Ovid MEDLINE of which 19 (26.4%) compared the performance of artificial intelligence techniques with traditional pharmacoepidemiological methods. In total, 44 comparisons have been performed in articles that aimed at 1) predicting the needed dosage given the patient's characteristics (31.8%), 2) predicting the clinical response following a pharmacological treatment (29.5%), 3) predicting the occurrence/severity of adverse drug reactions (20.5%), 4) predicting the propensity score (9.1%), 5) identifying subpopulation more at risk of drug inefficacy (4.5%), 6) predicting drug consumption (2.3%), and 7) predicting drug-induced lengths of stay in hospital (2.3%). In 22 out of 44 (50.0%) comparisons, artificial intelligence performed better than traditional pharmacoepidemiological techniques. Random forest (seven out of 11 comparisons; 63.6%) and artificial neural network (six out of 10 comparisons; 60.0%) were the techniques that in most of the comparisons outperformed traditional pharmacoepidemiological methods. Conclusion: Only a small fraction of articles compared the performance of artificial intelligence techniques with traditional pharmacoepidemiological methods and not all artificial intelligence techniques have been compared in a Pharmacoepidemiological setting. However, in 50% of comparisons, artificial intelligence performed better than pharmacoepidemiological techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
228. Surveillance of Antidepressant Safety (SADS): Active Signal Detection of Serious Medical Events Following SSRI and SNRI Initiation Using Big Healthcare Data.
- Author
- Aakjær, Mia, De Bruin, Marie Louise, Kulahci, Murat, and Andersen, Morten
- Subjects
- MEDICATION safety, DRUG side effects, SEROTONIN uptake inhibitors, NEUROTRANSMITTER uptake inhibitors
- Abstract
Introduction: The current process for generating evidence in pharmacovigilance has several limitations, which often lead to delays in the evaluation of drug-associated risks. Objectives: In this study, we proposed and tested a near real-time epidemiological surveillance system using sequential, cumulative analyses focusing on the detection and preliminary risk quantification of potential safety signals following initiation of selective serotonin reuptake inhibitors (SSRIs) and serotonin-norepinephrine reuptake inhibitors (SNRIs). Methods: We emulated an active surveillance system in an historical setting by conducting repeated annual cohort studies using nationwide Danish healthcare data (1996–2016). Outcomes were selected from the European Medicines Agency's Designated Medical Event list, summaries of product characteristics, and the literature. We followed patients for a maximum of 6 months from treatment initiation to the event of interest or censoring. We performed Cox regression analyses adjusted for standard sets of covariates. Potential safety signals were visualized using heat maps and cumulative hazard ratio (HR) plots over time. Results: In the total study population, 969,667 new users were included and followed for 461,506 person-years. We detected potential safety signals with incidence rates as low as 0.9 per 10,000 person-years. Having eight different exposure drugs and 51 medical events, we identified 31 unique combinations of potential safety signals with a positive association to the event of interest in the exposed group. We proposed that these signals were designated for further evaluation once they appeared in a prospective setting. In total, 21 (67.7%) of these were not present in the current summaries of product characteristics. Conclusion: The study demonstrated the feasibility of performing epidemiological surveillance using sequential, cumulative analyses. Larger populations are needed to evaluate rare events and infrequently used antidepressants. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
229. Condition monitoring of wind turbine faults: Modeling and savings.
- Author
- Hansen, Henrik Hviid, MacDougall, Neil, Jensen, Christopher Dam, Kulahci, Murat, and Nielsen, Bo Friis
- Subjects
- WIND turbines, MONITORING of machinery, TURBINE generators, STATISTICAL process control, COST structure, MOVING average process, LEAD time (Supply chain management)
- Abstract
This paper presents a case study on condition monitoring of power generators at offshore wind turbines. Two fault detection models are proposed for detecting sudden changes in the sensed value of metallic debris at the generator. The first model uses an exponentially weighted moving average, while the second monitors first-order derivatives using a fixed threshold. This is expected to improve the maintenance activities by avoiding late or early part replacement. The economic impact of the proposed approach is also provided with a realistic depiction of the cost structure associated with the corresponding maintenance plan. While the specifics of the case study are supported by real-life data, considering the prevalence of the use of generators not only in offshore wind turbines but also in other production environments, we believe the case study covered in this paper can be used as a blueprint for similar studies in other applications. • Large scale condition monitoring demonstrates economic benefits. • Case study from real wind turbine population with economic impact assessment. • Sensor for monitoring fault mode results in lead time for maintenance planning. • Sensitivity analysis assesses results despite downtime and power price uncertainty. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
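Entry 229 above describes two simple detectors for sudden changes in a metallic-debris signal: an exponentially weighted moving average and a fixed threshold on first-order derivatives. The sketch below is one plausible reading of those two ideas on a synthetic signal; the signal, the smoothing constant, the limits and the threshold are all assumptions, not the paper's models.

# Illustrative sketch: EWMA chart and a fixed derivative threshold on a
# synthetic debris signal whose growth rate jumps at t = 300.
import numpy as np

rng = np.random.default_rng(5)
increments = rng.normal(0.05, 0.10, size=400)          # first-order derivative of the sensed value
increments[300:] += 0.8                                # sudden fault: much faster accumulation

mu, sigma = increments[:250].mean(), increments[:250].std()   # in-control estimates
lam = 0.2
ucl = mu + 4 * sigma * np.sqrt(lam / (2 - lam))        # wide asymptotic EWMA limit (keeps the sketch free of false alarms)

ewma = np.empty_like(increments)
ewma[0] = mu
for t in range(1, len(increments)):
    ewma[t] = lam * increments[t] + (1 - lam) * ewma[t - 1]

monitor_start = 250                                    # monitoring starts after the training window
ewma_alarm = monitor_start + int(np.argmax(ewma[monitor_start:] > ucl))
threshold_alarm = monitor_start + int(np.argmax(increments[monitor_start:] > 0.5))
print("EWMA alarm at t =", ewma_alarm, "| derivative-threshold alarm at t =", threshold_alarm)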
230. Index.
- Author
- Shewhart, Walter A., Wilks, Samuel S., Bisgaard, Søren, and Kulahci, Murat
- Published
- 2011
- Full Text
- View/download PDF
231. Harvest time prediction for batch processes.
- Author
- Spooner, Max, Kold, David, and Kulahci, Murat
- Subjects
- BACTERIA, FERMENTATION, REGRESSION analysis, NUMERICAL analysis, MATHEMATICAL analysis
- Abstract
Batch processes usually exhibit variation in the time at which individual batches are stopped (referred to as the harvest time). Harvest time is based on the occurrence of some criterion and there may be great uncertainty as to when this criterion will be satisfied. This uncertainty increases the difficulty of scheduling downstream operations and results in fewer completed batches per day. A real case study is presented of a bacteria fermentation process. We consider the problem of predicting the harvest time of a batch in advance to reduce variation and improve batch quality. Lasso regression is used to obtain an interpretable model for predicting the harvest time at an early stage in the batch. A novel method for updating the harvest time predictions as a batch progresses is presented, based on information obtained from online alignment using dynamic time warping. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
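Entry 231 above predicts the harvest time of a fermentation batch from information available early in the batch using Lasso regression. A minimal sketch with synthetic data follows; the features, sample sizes and the use of cross-validated LassoCV are illustrative assumptions, and the dynamic-time-warping update step of the paper is not included.

# Illustrative sketch: Lasso regression predicting batch harvest time from
# synthetic early-phase features; only a few features truly matter.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n_batches, n_features = 120, 30
X = rng.normal(size=(n_batches, n_features))                 # early-batch summaries
harvest_time = 40.0 + 3.0 * X[:, 0] - 2.0 * X[:, 3] + rng.normal(0, 1.5, n_batches)

X_tr, X_te, y_tr, y_te = train_test_split(X, harvest_time, random_state=0)
model = LassoCV(cv=5, random_state=0).fit(X_tr, y_tr)

rmse = float(np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2)))
print("features kept by the Lasso:", np.flatnonzero(model.coef_).tolist())
print("hold-out RMSE (hours):", round(rmse, 2))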
232. Guest Editorial.
- Author
- Kulahci, Murat
- Subjects
- QUALITY control standards, INFORMATION technology management standards
- Abstract
An introduction to the journal is presented in which the guest editor discusses contributions of various researchers and practitioners to the Second Stu Hunter Conference held in Tempe, Arizona.
- Published
- 2015
- Full Text
- View/download PDF
233. Data Mining - A Special Issue of Quality and Reliability Engineering International (QREI).
- Author
- Li, Jing and Kulahci, Murat
- Subjects
- DATA mining, ENGINEERING, BIOINFORMATICS
- Abstract
The article calls for the submission of papers on data mining technologies and their applications in various domains such as engineering, health care, bioinformatics, social sciences and finance.
- Published
- 2013
- Full Text
- View/download PDF
234. Exploring the Use of Design of Experiments in Industrial Processes Operating Under Closed-Loop Control.
- Author
- Capaci, Francesca, Bergquist, Bjarne, Kulahci, Murat, and Vanhatalo, Erik
- Subjects
- MANUFACTURING processes, MACHINE tools, CLOSED loop systems, FEEDBACK control systems, EXPERIMENTAL design
- Abstract
Industrial manufacturing processes often operate under closed-loop control, where automation aims to keep important process variables at their set-points. In process industries such as pulp, paper, chemical and steel plants, it is often hard to find production processes operating in open loop. Instead, closed-loop control systems will actively attempt to minimize the impact of process disturbances. However, we argue that an implicit assumption in most experimental investigations is that the studied system is open loop, allowing the experimental factors to freely affect the important system responses. This scenario is typically not found in process industries. The purpose of this article is therefore to explore issues of experimental design and analysis in processes operating under closed-loop control and to illustrate how Design of Experiments can help in improving and optimizing such processes. The Tennessee Eastman challenge process simulator is used as a test-bed to highlight two experimental scenarios. The first scenario explores the impact of experimental factors that may be considered as disturbances in the closed-loop system. The second scenario exemplifies a screening design using the set-points of controllers as experimental factors. We provide examples of how to analyze the two scenarios. © 2017 The Authors Quality and Reliability Engineering International Published by John Wiley & Sons Ltd [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
235. Selecting local constraint for alignment of batch process data with dynamic time warping.
- Author
- Spooner, Max, Kold, David, and Kulahci, Murat
- Subjects
- BATCH processing, TIME series analysis, DATA analysis, DYNAMIC programming, ALGORITHMS
- Abstract
There are two key reasons for aligning batch process data. The first is to obtain same-length batches so that standard methods of analysis may be applied, whilst the second reason is to synchronise events that take place during each batch so that the same event is associated with the same observation number for every batch. Dynamic time warping has been shown to be an effective method for meeting these objectives. This is based on a dynamic programming algorithm that aligns a batch to a reference batch, by stretching and compressing its local time dimension. The resulting ”warping function” may be interpreted as a progress signature of the batch which may be appended to the aligned data for further analysis. For the warping function to be a realistic reflection of the progress of a batch, it is necessary to impose some constraints on the dynamic time warping algorithm, to avoid an alignment which is too aggressive and which contains pathological warping. Previous work has focused on addressing this issue using global constraints. In this work, we investigate the use of local constraints in dynamic time warping and define criteria for evaluating the degree of time distortion and variable synchronisation obtained. A local constraint scheme is extended to include constraints not previously considered, and a novel method for selecting the optimal local constraint with respect to the two criteria is proposed. For illustration, the method is applied to real data from an industrial bacteria fermentation process. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
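Entry 235 above is about constraining dynamic time warping (DTW) when aligning batch trajectories. The sketch below implements only the unconstrained textbook DTW recursion with the basic symmetric step pattern and backtracks the warping function; the local-constraint schemes and the selection criteria studied in the paper are not reproduced.

# Illustrative sketch: basic DTW alignment of a batch trajectory to a reference,
# returning the accumulated distance and the warping path (no local constraints).
import numpy as np

def dtw(batch, reference):
    n, m = len(batch), len(reference)
    cost = np.abs(batch[:, None] - reference[None, :])
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(
                acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]
            )
    path, i, j = [], n, m                       # backtrack the warping function
    while (i, j) != (0, 0):
        path.append((i - 1, j - 1))
        step = int(np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]]))
        i, j = [(i - 1, j - 1), (i - 1, j), (i, j - 1)][step]
    return acc[n, m], path[::-1]

t = np.linspace(0.0, 1.0, 80)
reference = np.sin(2 * np.pi * t)
batch = np.sin(2 * np.pi * t ** 1.3)            # same events, distorted local time
distance, path = dtw(batch, reference)
print("DTW distance:", round(float(distance), 3), "| first aligned index pairs:", path[:5])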
236. iCFD: Interpreted Computational Fluid Dynamics – Degeneration of CFD to one-dimensional advection-dispersion models using statistical experimental design – The secondary clarifier.
- Author
- Guyonvarch, Estelle, Ramin, Elham, Kulahci, Murat, and Plósz, Benedek Gy
- Subjects
- COMPUTATIONAL fluid dynamics, ADVECTION-diffusion equations, BOUNDARY value problems, EXPERIMENTAL design, LATIN hypercube sampling
- Abstract
The present study aims at using statistically designed computational fluid dynamics (CFD) simulations as numerical experiments for the identification of one-dimensional (1-D) advection-dispersion models – computationally light tools, used e.g., as sub-models in systems analysis. The objective is to develop a new 1-D framework, referred to as interpreted CFD (iCFD) models, in which statistical meta-models are used to calculate the pseudo-dispersion coefficient ( D ) as a function of design and flow boundary conditions. The method – presented in a straightforward and transparent way – is illustrated using the example of a circular secondary settling tank (SST). First, the significant design and flow factors are screened out by applying the statistical method of two-level fractional factorial design of experiments. Second, based on the number of significant factors identified through the factor screening study and system understanding, 50 different sets of design and flow conditions are selected using Latin Hypercube Sampling (LHS). The boundary condition sets are imposed on a 2-D axi-symmetrical CFD simulation model of the SST. In the framework, to degenerate the 2-D model structure, CFD model outputs are approximated by the 1-D model through the calibration of three different model structures for D . Correlation equations for the D parameter then are identified as a function of the selected design and flow boundary conditions (meta-models), and their accuracy is evaluated against D values estimated in each numerical experiment. The evaluation and validation of the iCFD model structure is carried out using scenario simulation results obtained with parameters sampled from the corners of the LHS experimental region. For the studied SST, additional iCFD model development was carried out in terms of (i) assessing different density current sub-models; (ii) implementation of a combined flocculation, hindered, transient and compression settling velocity function; and (iii) assessment of modelling the onset of transient and compression settling. Furthermore, the optimal level of model discretization both in 2-D and 1-D was undertaken. Results suggest that the iCFD model developed for the SST through the proposed methodology is able to predict solid distribution with high accuracy – taking a reasonable computational effort – when compared to multi-dimensional numerical experiments, under a wide range of flow and design conditions. iCFD tools could play a crucial role in reliably predicting systems' performance under normal and shock events. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
237. Transient risk of water layer formation on PCBAs in different climates: Climate data analysis and experimental study.
- Author
- Conseil-Gudla, Hélène, Spooner, Max, Kulahci, Murat, and Ambat, Rajan
- Subjects
- DEW point, RELIABILITY of electronics, DATA analysis, ELECTRONIC packaging, HUMIDITY, STRAY currents
- Abstract
The reliability of electronic devices depends on the environmental loads at which they are exposed. Climatic conditions vary greatly from one geographical location to another (from hot and humid to cold and dry areas), and the temperature and humidity vary from season to season and from day to day. High levels of temperature and relative humidity mean high water content in the air, but saturated conditions (i.e. 100 % RH) can also be reached at low temperatures. This paper analyses the relationship between temperature, dew point temperature, their difference (here called ΔT), and occurrence and time period of dew point closeness to temperature on transient condensation effects on electronics. This paper has two parts: (i) Data analysis of typical climate profiles within the different zones of the Köppen -Geiger classification to pick up conditions where ΔT is very low (for example ≤0.4 °C). Various summary statistics of these events are calculated in order to assess the temperature at which these events happen, their durations and their frequency and (ii) Empirical investigation of the effect of ΔT ≤ 0.4 °C on the reliability of electronics by mimicking an electronic device, for which the time period of the ΔT is varied in one set of experiments, and the ambient temperature is varied in the other. The effect of the packaging of the electronics is also studied in this section. The statistical study of the climate profiles shows that the transient events (ΔT ≤ 0.4 °C) occur in almost every location, at different temperature levels, with a duration of at least one observation (where observations were hourly in the database). The experimental results show that presence of the enclosure, cleanliness and bigger pitch size reduce the levels of leakage current, while similar high levels of leakage current are observed for the different durations of the transient events, indicating that these climatic transient conditions can have a big impact on the electronics reliability. • Statistical climate data analysis showed that almost all of the locations experience the transient risk of condensation. • Short durations of the transient events resulted in similar high levels of LC compared to longer durations. • The enclosure protection has buffered the humidity change experienced by the SIR PCB placed inside. • The hygroscopic nature of the weak organic acids has the highest impact on LC and ECM formation. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
238. An easy to use GUI for simulating big data using Tennessee Eastman process.
- Author
- Andersen, Emil B., Udugama, Isuru A., Gernaey, Krist V., Khan, Abdul R., Bayer, Christoph, and Kulahci, Murat
- Subjects
- CHEMICAL processes, GRAPHICAL user interfaces, MANUFACTURING processes, INDUSTRY 4.0, FAULT diagnosis, ELECTRONIC data processing, BIG data, INTERNET of things
- Abstract
Data‐driven process monitoring and control techniques and their application to industrial chemical processes are gaining popularity due to the current focus on Industry 4.0, digitalization and the Internet of Things. However, for the development of such techniques, there are significant barriers that must be overcome in obtaining sufficiently large and reliable datasets. As a result, the use of real plant and process data in developing and testing data‐driven process monitoring and control tools can be difficult without investing significant efforts in acquiring, treating, and interpreting the data. Therefore, researchers need a tool that effortlessly generates large amounts of realistic and reliable process data without the requirement for additional data treatment or interpretation. In this work, we propose a data generation platform based on the Tennessee Eastman Process simulation benchmark. A graphical user interface (GUI) developed in MATLAB Simulink is presented that enables users to generate massive amounts of data for testing applicability of big data concepts in the realm of process control for continuous time‐dependent processes. An R‐Shiny app that interacts with the data generation tool is also presented for illustration purposes. The app can visualize the results generated by the Tennessee Eastman Process and can carry out a standard fault detection and diagnosis studies based on PCA. The data generator GUI is available free of charge for research purposes at https://github.com/dtuprodana/TEP. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
239. On monitoring industrial processes under feedback control.
- Author
- Capaci, Francesca, Vanhatalo, Erik, Palazoglu, Ahmet, Bergquist, Bjarne, and Kulahci, Murat
- Subjects
- MANUFACTURING processes, STATISTICAL process control, QUALITY control charts, PRODUCTION engineering, PSYCHOLOGICAL feedback
- Abstract
The concurrent use of statistical process control and engineering process control involves monitoring manipulated and controlled variables. One multivariate control chart may handle the statistical monitoring of all variables, but observing the manipulated and controlled variables in separate control charts may improve understanding of how disturbances and the controller performance affect the process. In this article, we illustrate how step and ramp disturbances manifest themselves in a single-input-single-output system by studying their resulting signatures in the controlled and manipulated variables. The system is controlled by variations of the widely used proportional-integral-derivative (PID) control scheme. Implications for applying control charts for these scenarios are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
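A small simulation makes the point of entry 239 above concrete: under feedback control, a step disturbance leaves little lasting trace in the controlled variable but a clear signature in the manipulated variable. The first-order process, the discrete PI controller and all gains below are illustrative assumptions, not the article's system.

# Illustrative sketch: first-order process under discrete PI control with a
# step disturbance at t = 100; compare the controlled and manipulated variables.
import numpy as np

rng = np.random.default_rng(7)
n, target = 300, 0.0
kp, ki = 0.8, 0.2                       # PI controller gains (assumed)
a, b = 0.9, 0.5                         # process: y[t] = a*y[t-1] + b*u[t-1] + d[t] + noise

y, u = np.zeros(n), np.zeros(n)
disturbance = np.where(np.arange(n) >= 100, 1.0, 0.0)   # step disturbance
integral = 0.0

for t in range(1, n):
    y[t] = a * y[t - 1] + b * u[t - 1] + disturbance[t] + rng.normal(0, 0.05)
    error = target - y[t]
    integral += error
    u[t] = kp * error + ki * integral   # manipulated variable

print("controlled variable, mean after the transient:", round(float(y[250:].mean()), 3))
print("manipulated variable, mean after the transient:", round(float(u[250:].mean()), 3))

The controlled variable settles back near its target while the manipulated variable shifts to a new level to absorb the disturbance, which is why the article argues for charting the manipulated and controlled variables separately.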
240. An extension of PARAFAC to analyze multi-group three-way data.
- Author
- Rotari, Marta, Diaz, Valeria Fonseca, De Ketelaere, Bart, and Kulahci, Murat
- Subjects
- IMAGE analysis, CONSOLIDATED financial statements
- Abstract
This paper introduces a novel methodology for analyzing three-way array data with a multi-group structure. Three-way arrays are commonly observed in various domains, including image analysis, chemometrics, and real-world applications. In this paper, we use a practical case study of process modeling in additive manufacturing, where batches are structured according to multiple groups. Vast volumes of data for multiple variables and process stages are recorded by sensors installed on the production line for each batch. For these three-way arrays, the link between the final product and the observations creates a grouping structure in the observations. This grouping may hamper gaining insight into the process if only some of the groups dominate the controlled variability of the products. In this study, we develop an extension of the PARAFAC model that takes into account the grouping structure of three-way data sets. With this extension, it is possible to estimate a model that is representative of all the groups simultaneously by finding their common structure. The proposed model has been applied to three simulation data sets and a real manufacturing case study. The capability to find the common structure of the groups is compared to PARAFAC and the insights into the importance of variables delivered by the models are discussed. • A new methodology for the analysis of multi-group three-way datasets. • The model is an extension of the PARAFAC model that considers the multi-group structure of the data. • The model delivers loadings coefficients that account for the common variability of all the groups present in the data. • Simulations and a real-case study were used to showcase the aspects of the proposed model. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
241. Monitoring pig movement at the slaughterhouse using optical flow and modified angular histograms.
- Author
- Gronskyte, Ruta, Clemmensen, Line Harder, Hviid, Marchen Sonja, and Kulahci, Murat
- Subjects
- OPTICAL flow, HISTOGRAMS, SWINE behavior, ANIMAL herds, SLAUGHTERING, QUANTITATIVE research
- Abstract
We analyse the movement of pig herds through video recordings at a slaughterhouse by using statistical analysis of optical flow (OF) patterns. Unlike the previous attempts to analyse pig movement, no markers, trackers nor identification of individual pigs are needed. Our method handles the analysis of unconstrained areas where pigs are constantly entering and leaving. The goal is to improve animal welfare by real-time prediction of abnormal behaviour through proper interventions. The aim of this study is to identify any stationary pig, which can be an indicator of an injury or an obstacle. In this study, we use the OF vectors to describe points of movement on all pigs and thereby analyse the herd movement. Subsequently, the OF vectors are used to identify abnormal movements of individual pigs. The OF vectors, obtained from the pigs, point in multiple directions rather than in one movement direction. To accommodate the multiple directions of the OF vectors, we propose to quantify OF using a summation of the vectors into bins according to their angles, which we call modified angular histograms. Sequential feature selection is used to select angle ranges, which identify pigs that are moving abnormally in the herd. The vector lengths from the selected angle ranges are compared to the corresponding median, 25th and 75th percentiles from a training set, which contains only normally moving pigs. We show that the method is capable of locating stationary pigs in the recordings regardless of the number of pigs in the frame. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
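The "modified angular histogram" summary used in entry 241 above can be sketched directly: optical-flow vectors are binned by angle and the vector lengths are summed within each bin. Random vectors stand in for real optical flow here, and the bin count is an assumption; extracting the flow itself (e.g., with OpenCV) is not shown.

# Illustrative sketch: sum optical-flow vector lengths into angular bins.
import numpy as np

def modified_angular_histogram(flow_xy, n_bins=8):
    angles = np.arctan2(flow_xy[:, 1], flow_xy[:, 0])            # direction in (-pi, pi]
    lengths = np.hypot(flow_xy[:, 0], flow_xy[:, 1])             # movement magnitude
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    idx = np.clip(np.digitize(angles, edges) - 1, 0, n_bins - 1)
    return np.bincount(idx, weights=lengths, minlength=n_bins)

rng = np.random.default_rng(8)
moving_pigs = rng.normal([1.0, 0.2], 0.3, size=(500, 2))         # herd moving mostly to the right
stationary_pig = rng.normal(0.0, 0.05, size=(200, 2))            # near-zero flow vectors

print("moving herd:          ", np.round(modified_angular_histogram(moving_pigs), 1))
print("with a stationary pig:", np.round(modified_angular_histogram(np.vstack([moving_pigs, stationary_pig])), 1))

Adding the stationary pig spreads a small amount of vector length into bins outside the dominant movement direction, which is the kind of deviation the paper's classifier looks for.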
242. Pig herd monitoring and undesirable tripping and stepping prevention.
- Author
- Gronskyte, Ruta, Clemmensen, Line Harder, Hviid, Marchen Sonja, and Kulahci, Murat
- Subjects
- SWINE behavior, SLAUGHTERING, MEAT quality, MODERN society, ANIMAL welfare
- Abstract
Humane handling and slaughter of livestock are of major concern in modern societies. Monitoring animal wellbeing in slaughterhouses is critical in preventing unnecessary stress and physical damage to livestock, which can also affect the meat quality. The goal of this study is to monitor pig herds at the slaughterhouse and identify undesirable events such as pigs tripping or stepping on each other. In this paper, we monitor pig behavior in color videos recorded during unloading from transportation trucks. We monitor the movement of a pig herd where the pigs enter and leave a surveyed area. The method is based on optical flow, which is not well explored for monitoring all types of animals, but is the method of choice for human crowd monitoring. We recommend using modified angular histograms to summarize the optical flow vectors. We show that the classification rate based on support vector machines is 93% of all frames. The sensitivity of the model is 93.5% with 90% specificity and 6.5% false alarm rate. The radial lens distortion and camera position required for convenient surveillance make the recordings highly distorted. Therefore, we also propose a new approach to correct lens and foreshortening distortions by using moving reference points. The method can be applied real-time during the actual unloading operations of pigs. In addition, we present a method for identification of the causes leading to undesirable events, which currently only runs off-line. The comparative analysis of three drivers, which performed the unloading of the pigs from the trucks in the available datasets, indicates that the drivers perform significantly differently. Driver 1 has 2.95 times higher odds to have pigs tripping and stepping on each other than the two others, and Driver 2 has 1.11 times higher odds than Driver 3. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
243. In vivo Comet assay – statistical analysis and power calculations of mice testicular cells.
- Author
- Hansen, Merete Kjær, Sharma, Anoop Kumar, Dybdahl, Marianne, Boberg, Julie, and Kulahci, Murat
- Subjects
- CELL analysis, DNA damage, LABORATORY mice, DATA analysis, GEL electrophoresis, GERM cells, MULTILEVEL models
- Abstract
The in vivo Comet assay is a sensitive method for evaluating DNA damage. A recurrent concern is how to analyze the data appropriately and efficiently. A popular approach is to summarize the raw data into a summary statistic prior to the statistical analysis. However, consensus on which summary statistic to use has yet to be reached. Another important consideration concerns the assessment of proper sample sizes in the design of Comet assay studies. This study aims to identify a statistic suitably summarizing the % tail DNA of mice testicular samples in Comet assay studies. A second aim is to provide curves for this statistic outlining the number of animals and gels to use. The current study was based on 11 compounds administered via oral gavage in three doses to male mice: CAS no. 110-26-9, CAS no. 512-56-1, CAS no. 111873-33-7, CAS no. 79-94-7, CAS no. 115-96-8, CAS no. 598-55-0, CAS no. 636-97-5, CAS no. 85-28-9, CAS no. 13674-87-8, CAS no. 43100-38-5 and CAS no. 60965-26-6. Testicular cells were examined using the alkaline version of the Comet assay and the DNA damage was quantified as % tail DNA using a fully automatic scoring system. From the raw data 23 summary statistics were examined. A linear mixed-effects model was fitted to the summarized data and the estimated variance components were used to generate power curves as a function of sample size. The statistic that most appropriately summarized the within-sample distributions was the median of the log-transformed data, as it most consistently conformed to the assumptions of the statistical model. Power curves for 1.5-, 2-, and 2.5-fold changes of the highest dose group compared to the control group when 50 and 100 cells were scored per gel are provided to aid in the design of future Comet assay studies on testicular cells. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
244. A taxonomy of railway track maintenance planning and scheduling: A review and research trends.
- Author
- Sedghi, Mahdieh, Kauppila, Osmo, Bergquist, Bjarne, Vanhatalo, Erik, and Kulahci, Murat
- Subjects
- TRAIN schedules, RAILROADS, SCHEDULING, TAXONOMY
- Abstract
• Developing a novel taxonomy for railway track maintenance planning and scheduling (RTMP&S) decision-making models. • Discussing the differences in planning and scheduling problems in railway maintenance. • Considering the structural characteristics of the railway track that can affect decision-making models. • Reviewing the attributes of maintenance management decisions in RTMP&S decision-making. • Summarising the optimisation frameworks for modelling the RTMP&S problems and the proposed solution approaches in the literature. • Discussing research trends can help researchers and practitioners to have a clear understanding of the state of the art of RTMP&S problems and future research directions. Railway track maintenance and renewal are vital for railway safety, train punctuality, and travel comfort. Therefore, having cost-effective maintenance is critical in managing railway infrastructure assets. There has been a considerable amount of research performed on mathematical and decision support models for improving the application of railway track maintenance planning and scheduling. This article reviews the literature in decision support models for railway track maintenance planning and scheduling and transforms the results into a problem taxonomy. Furthermore, the article discusses current approaches in optimising maintenance planning and scheduling, research trends, and possible gaps in the related decision-making models. [Display omitted] [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
245. Cost-sensitive learning classification strategy for predicting product failures.
- Author
- Frumosu, Flavia Dalia, Khan, Abdul Rauf, Schiøler, Henrik, Kulahci, Murat, Zaki, Mohamed, and Westermann-Rasmussen, Peter
- Subjects
- PRODUCT failure, LEARNING strategies, VORONOI polygons, CONTAINER industry, INDUSTRIAL costs, FEATURE selection
- Abstract
• Modified cost-sensitive classification strategy for an industrial problem. • Decision rule for going through or skipping elements of last quality control stage. • Trade-off problem between cost and quality. • Modified strategy can also be applied on problems without an associated cost. In the current era of Industry 4.0, sensor data used in connection with machine learning algorithms can help manufacturing industries to reduce costs and to predict failures in advance. This paper addresses a binary classification problem found in manufacturing engineering, which focuses on how to ensure product quality delivery and at the same time to reduce production costs. The aim behind this problem is to predict the number of faulty products, which in this case is extremely low. As a result of this characteristic, the problem is reduced to an imbalanced binary classification problem. The authors contribute to imbalanced classification research in three important ways. First, the industrial application coming from the electronic manufacturing industry is presented in detail, along with its data and modelling challenges. Second, a modified cost-sensitive classification strategy based on a combination of Voronoi diagrams and genetic algorithm is applied to tackle this problem and is compared to several base classifiers. The results obtained are promising for this specific application. Third, in order to evaluate the flexibility of the strategy, and to demonstrate its wide range of applicability, 25 real-world data sets are selected from the KEEL repository with different imbalance ratios and number of features. The strategy, in this case implemented without a predefined cost, is compared with the same base classifiers as those used for the industrial problem. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
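Entry 245 above addresses an imbalanced, cost-sensitive classification problem. The sketch below shows only the generic trade-off (a cost-based decision threshold applied to predicted probabilities) on synthetic data; the Voronoi-diagram and genetic-algorithm strategy of the paper, and its KEEL benchmark, are not reproduced, and both costs are made-up numbers.

# Illustrative sketch: cost-sensitive decisions on an imbalanced problem via a
# cost-based probability threshold (synthetic data, assumed costs).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.98, 0.02], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

cost_missed, cost_inspection = 50.0, 1.0                        # assumed costs of the two error types
threshold = cost_inspection / (cost_inspection + cost_missed)   # cost-minimizing probability threshold
pred = (clf.predict_proba(X_te)[:, 1] > threshold).astype(int)

missed = int(np.sum((y_te == 1) & (pred == 0)))
inspections = int(np.sum((y_te == 0) & (pred == 1)))
print(f"missed failures: {missed}, extra inspections: {inspections}, "
      f"total cost: {missed * cost_missed + inspections * cost_inspection:.0f}")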
246. Artificial Intelligence in Pharmacoepidemiology: A Systematic Review. Part 1-Overview of Knowledge Discovery Techniques in Artificial Intelligence.
- Author
- Sessa M, Khan AR, Liang D, Andersen M, and Kulahci M
- Abstract
Aim: To perform a systematic review on the application of artificial intelligence (AI) based knowledge discovery techniques in pharmacoepidemiology. Study Eligibility Criteria: Clinical trials, meta-analyses, narrative/systematic reviews, and observational studies using (or mentioning articles using) artificial intelligence techniques were eligible. Articles without a full text available in the English language were excluded. Data Sources: Articles recorded from 1950/01/01 to 2019/05/06 in Ovid MEDLINE were screened. Participants: Studies including humans (real or simulated) exposed to a drug. Results: In total, 72 original articles and 5 reviews were identified via Ovid MEDLINE. Twenty different knowledge discovery methods were identified, mainly from the area of machine learning (66/72; 91.7%). Classification/regression (44/72; 61.1%), classification/regression + model optimization (13/72; 18.0%), and classification/regression + feature selection (12/72; 16.7%) were the three most frequent tasks in the reviewed literature that machine learning methods have been applied to solve. The top three used techniques were artificial neural networks, random forest, and support vector machine models. Conclusions: The use of knowledge discovery techniques from artificial intelligence has increased exponentially over the years, covering numerous sub-topics of pharmacoepidemiology. Systematic Review Registration: Systematic review registration number in PROSPERO: CRD42019136552. (Copyright © 2020 Sessa, Khan, Liang, Andersen and Kulahci.)
- Published
- 2020
- Full Text
- View/download PDF
247. Quantifying the sources of uncertainty when calculating the limiting flux in secondary settling tanks using iCFD.
- Author
- Guyonvarch E, Ramin E, Kulahci M, and Plósz BG
- Subjects
- Models, Theoretical, Motor Vehicles, Sewage, Uncertainty, Hydrodynamics, Waste Disposal, Fluid
- Abstract
Solids-flux theory (SFT) and state-point analysis (SPA) are used for the design, operation and control of secondary settling tanks (SSTs). The objectives of this study were to assess uncertainties, propagating from flow and solids loading boundary conditions as well as compression settling behaviour to the calculation of the limiting flux (J_L) and the limiting solids concentration (X_L). The interpreted computational fluid dynamics (iCFD) simulation model was used to predict one-dimensional local concentrations and limiting solids fluxes as a function of loading and design boundary conditions. A two-level fractional factorial design of experiments was used to infer the relative significance of factors unaccounted for in conventional SPA. To move away from using semi-arbitrary safety factors, a systematic approach was proposed to calculate the maximum SST capacity by employing a factor of 23% and a regression meta-model to correct values of J_L and X_L, respectively - critical for abating hydraulic effects under wet-weather flow conditions.
- Published
- 2020
- Full Text
- View/download PDF