270 results
Search Results
2. A Bibliography of Process Capability Papers.
- Author
-
Spiring, Fred, Leung, Bartholomew, Cheng, Smiley, and Yeung, Anthony
- Subjects
*BIBLIOGRAPHY, *INFORMATION resources, *ENGINEERING, *INDUSTRIAL arts, *TECHNOLOGY - Abstract
Presents a bibliography of process capability books, manuals and articles in relation to the engineering sector. "Process Capability Indices in Theory and Practice," by C. Lovelace and S. Kotz; "Measuring Process Capability," by D. R. Bothe; "Process Capability Indices," by N. L. Johnson and S. Kotz.
- Published
- 2003
- Full Text
- View/download PDF
3. WEIBULL PROBABILITY PAPER USED ON HIGHLY STRESSED BIMODAL COMPONENTS.
- Author
-
Drapella, Antoni
- Subjects
*WEIBULL distribution, *SYSTEM failures, *DISTRIBUTION (Probability theory), *RELIABILITY in engineering, *ENGINEERING, *SYSTEMS engineering - Abstract
Under high stress conditions the 'freak' and 'strong' subpopulations lie very close to each other on the time scale. It is easy to 'read' such a time-to-failure distribution as unimodal, especially when probability paper is used. This paper puts forward Parzen's estimator of the probability density function as a very useful method of indicating the 'freak' subpopulation. [ABSTRACT FROM AUTHOR]
- Published
- 1985
- Full Text
- View/download PDF
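The Parzen density approach advocated in the abstract above can be sketched briefly. Below is a minimal Gaussian-kernel version; the failure times, grid and bandwidth are invented for illustration and are not taken from the paper.

```python
import numpy as np

def parzen_density(data, grid, bandwidth):
    """Parzen (kernel) density estimate with a Gaussian kernel.

    Averages one Gaussian bump per observation; a modest bandwidth keeps
    the 'freak' and 'strong' modes separate where probability paper would
    blur them into an apparently unimodal distribution.
    """
    data = np.asarray(data, dtype=float)
    # One kernel per observation, evaluated over the whole grid at once
    z = (grid[:, None] - data[None, :]) / bandwidth
    kernels = np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)
    return kernels.mean(axis=1) / bandwidth

# Illustrative times-to-failure: an early 'freak' cluster and a later
# 'strong' cluster, close enough to be misread as one population
times = np.array([1.0, 1.4, 1.8, 2.2, 2.6,   # freak subpopulation
                  8.0, 8.4, 8.8, 9.2, 9.6])  # strong subpopulation

grid = np.linspace(-1.0, 12.0, 500)
f = parzen_density(times, grid, bandwidth=0.7)

# Interior local maxima of the estimate: two peaks betray the mixture
peaks = int(np.sum((f[1:-1] > f[:-2]) & (f[1:-1] > f[2:])))
print(peaks)  # → 2
```

Bandwidth choice matters here: too large a bandwidth smooths the two clusters back into the single mode the paper warns against misreading.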
4. Optimized sampling design and rationale for verification and validation.
- Author
-
Cheng, S., Kupfer, K., Dixon, M., and Shammas, S.
- Subjects
CONFIRMATION (Logic) ,EXPERT computer system validation ,SAMPLING (Process) ,ENGINEERING ,CURVE fitting ,SAMPLE size (Statistics) ,PRODUCT quality - Abstract
An optimized sampling design that meets customer, design, or process requirements, while balancing technology limitations, is still a common challenge to engineering communities. This is especially true in the medical device industry. Acceptance sampling plans for manufacturing are widely available, but the appropriate sampling plans for verification and validation (V&V) are less well known. This paper applies established statistical theory to derive sampling plans appropriate for estimating product reliability during V&V, where reliability must exceed an established threshold with an appropriate margin of statistical confidence. The paper provides insight on how to estimate parameters of interest and interpret acceptance criteria. Operating characteristic curves are used to examine if a design or process is capable of producing future product that meets design specifications and/or customer requirements in terms of confidence and reliability. The methodology is applied to both attribute and variable sampling plans, including examples showing how to achieve a high probability of passing the acceptance criteria. Formulas, sample size tables, and operating characteristic curves are provided for engineering practitioners to use. The paper aims at providing a practical quantitative approach and a valid statistical rationale to assess overall product quality during V&V. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
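In the attribute case, confidence/reliability acceptance criteria of this kind are often met with the classic zero-failure (success-run) sample size n = ln(1 − C)/ln(R). The sketch below implements that standard result plus a brute-force binomial search for plans that allow failures; it illustrates the general idea and is not necessarily the exact plan family derived in the paper.

```python
import math

def success_run_sample_size(confidence, reliability, allowed_failures=0):
    """Smallest n demonstrating `reliability` at `confidence` when at most
    `allowed_failures` failures are observed in n independent trials.

    Zero-failure case: the classic success-run formula n = ln(1-C)/ln(R).
    Otherwise: grow n until the binomial tail drops below 1 - C.
    """
    if allowed_failures == 0:
        return math.ceil(math.log(1.0 - confidence) / math.log(reliability))
    n = allowed_failures + 1
    while True:
        # P(at most c failures | true reliability exactly R)
        tail = sum(
            math.comb(n, k) * (1.0 - reliability) ** k * reliability ** (n - k)
            for k in range(allowed_failures + 1)
        )
        if tail <= 1.0 - confidence:
            return n
        n += 1

print(success_run_sample_size(0.95, 0.99))  # → 299 (95% confidence, 99% reliability)
print(success_run_sample_size(0.95, 0.95))  # → 59  (the familiar 95/95 plan)
```

Allowing one failure at 95/95 raises the sample size to 93, the usual trade-off between test cost and tolerance for observed failures.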
5. FOREWORD.
- Author
-
Pollino, Emiliano and Fantini, Fausto
- Subjects
SEMICONDUCTORS ,RELIABILITY in engineering ,ENGINEERING ,QUALITY control ,QUALITY ,QUALITY assurance ,PERIODICALS - Abstract
This July 1, 1991 issue of the journal "Quality and Reliability Engineering International" is devoted to the reliability physics of semiconductor devices. It is quite surprising that such a low percentage of papers dealing with this subject is usually published in the scientific literature, compared with the number of papers on statistics or, lately, on total quality management and related issues. The explanation for this situation is manifold. First of all, there is an economic issue: establishing a failure analysis laboratory where a thorough investigation can be made of the failure mechanisms of today's devices can be quite expensive, and therefore unaffordable for medium and small industries and academic institutions alike.
- Published
- 1991
- Full Text
- View/download PDF
6. Assessing the maintenance in a process using a semi-parametric approach.
- Author
-
Ansell, Jake, Archibald, Tom, Dagpunar, John, Thomas, Lyn, Abell, Peter, and Duncalf, David
- Subjects
MAINTENANCE ,MAINTAINABILITY (Engineering) ,SYSTEM downtime ,ENGINEERING ,SERVICE life - Abstract
In developing any successful maintenance strategy it is important to be able to estimate the rate of occurrence of maintenance events. In a previous paper the authors have argued that there are advantages to using a non-parametric or semi-parametric approach to such analysis. In this paper the authors explore the application of the approach to data from the water industry through a case study. The approach provides insight into the context, revealing the impact of maintenance on the process. The effect of the maintenance strategy can be evaluated. The paper explores the practical issues of missing values and implementation of an appropriate maintenance strategy. Copyright © 2001 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2001
- Full Text
- View/download PDF
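One common non-parametric route to the rate of occurrence of maintenance events is a Nelson-Aalen-type estimate of the mean cumulative function (MCF); whether this matches the authors' exact estimator is an assumption, and the fleet data below are hypothetical.

```python
from collections import Counter

def mean_cumulative_function(event_times, observation_ends):
    """Nelson-Aalen-style MCF for a fleet of repairable units.

    At each event time t the MCF rises by (events at t) / (units still
    under observation at t); its local slope estimates the rate of
    occurrence of maintenance events.
    """
    events = Counter(event_times)
    mcf, value = [], 0.0
    for t in sorted(events):
        at_risk = sum(1 for end in observation_ends if end >= t)
        value += events[t] / at_risk
        mcf.append((t, value))
    return mcf

# Hypothetical fleet: three units observed for 10, 10 and 6 years,
# with maintenance events pooled across the fleet
ends = [10.0, 10.0, 6.0]
events = [2.0, 4.5, 5.0, 7.0, 9.0]

for t, m in mean_cumulative_function(events, ends):
    print(f"t = {t:4.1f}   MCF = {m:.3f}")
```

Censoring is handled by the at-risk count: after year 6 only two units remain under observation, so later events carry a larger increment.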
7. Fourth International Conference on Quality and Reliability (ICQR2005).
- Author
-
Yuan Lu and Min Xie
- Subjects
SCHOLARLY periodicals ,QUALITY control ,RELIABILITY in engineering ,ENGINEERING ,CONFERENCES & conventions - Abstract
The article introduces the special issue of "Quality and Reliability Engineering International." Selected papers presented at the Fourth International Conference on Quality and Reliability are included in the issue. The conference's aim was to provide a forum for academics and practitioners that would enable them to share their research application results in quality and reliability engineering.
- Published
- 2007
- Full Text
- View/download PDF
8. Guidelines for corrective replacement based on low stochastic structure assumptions.
- Author
-
Coolen, F. P. A. and Newby, M. J.
- Subjects
MAINTENANCE ,MAINTAINABILITY (Engineering) ,ENGINEERING ,MANUFACTURING processes ,STOCHASTIC systems - Abstract
This paper presents corrective replacement decisions, e.g. for machines in a production process or other technical systems. In an attempt to base decisions on observed failure times only, some guidelines are provided for replacing failed machines. The method does not provide an optimal strategy in all situations, indicating that sometimes more information or assumptions are needed. The optimal policy indicates how to act if the low assumptions model recommends action. If the model does not strongly indicate an action, more data need to be collected or more sophisticated modelling is needed. Further modelling would require additional assumptions or input from expert judgements, and could be an expensive exercise. A method that gives clear guidelines if the data are strongly indicative may save time and money. This paper presents the model in an elementary form and is intended as a first step towards modelling more realistic maintenance situations. © 1997 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 1997
- Full Text
- View/download PDF
9. Data Mining-A Special Issue of Quality and Reliability Engineering International ( QREI).
- Author
-
Li, Jing and Kulahci, Murat
- Subjects
DATA mining ,ENGINEERING ,BIOINFORMATICS - Abstract
The article calls for the submission of papers on data mining technologies and their applications in various domains such as engineering, health care, bioinformatics, social sciences and finance.
- Published
- 2013
- Full Text
- View/download PDF
10. A Study of Reliability of Multi-State Systems with Two Performance Sharing Groups.
- Author
-
Peng, Rui, Liu, Hanlin, and Xie, Min
- Subjects
ENGINEERING ,ELECTRIC power distribution ,DATA transmission systems ,DIGITAL communications ,ELECTRIC power - Abstract
Performance sharing can be widely seen in different kinds of engineering systems, such as meshed power distribution systems and interconnected data transmission systems. This paper presents a study of systems consisting of multi-state units connected as two performance sharing groups, and the suggested methodology can be adapted to the case of three or more performance sharing groups. To be more general, a system unit is allowed to be in one single performance sharing group or in both. Each unit has a random demand to satisfy, and the units can transmit capacity to each other, provided that the total performance transmitted in each performance sharing group does not surpass its maximum transmission capacity. An algorithm based on the universal generating function technique is proposed to evaluate the system reliability and the expected system performance deficiency. Copyright © 2016 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
11. The Hirsch index in manufacturing and Quality engineering.
- Author
-
Franceschini, Fiorenzo and Maisano, Domenico A.
- Subjects
H-index (Citation analysis) ,MANUFACTURING processes ,RELIABILITY in engineering ,QUALITY control ,BIBLIOMETRICS ,ENGINEERING - Abstract
The Hirsch index (h) is a recent bibliometric indicator for assessing the research output of scientists. Its most remarkable characteristics are immediate intuitive meaning, effective synthesis and easy calculation. With few modifications, the use of this indicator can be profitably extended to other fields beyond bibliometrics. The main novelty of the paper is to suggest some potential applications in manufacturing and Quality engineering, focussing the attention on the h capacity to aggregate and synthesize the most commonly used metrics in these areas. Copyright © 2009 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
12. First European Symposium on Reliability of Electron Devices, Failure Physics and Analysis, ESREF. Tecnopolis (Bari), Italy 2-5 October 1990.
- Subjects
CONFERENCES & conventions ,RELIABILITY in engineering ,ELECTRONS ,ELECTRONICS ,PRESSURE groups ,ENGINEERING - Abstract
The First European Symposium on Reliability of Electron Devices, Failure Physics and Analysis, was organized with the aim of offering an annual forum to European researchers in the field of electron device reliability, where they can meet and discuss their common problems. The 1990 conference was first planned as a meeting of Italian researchers engaged in the five-year project Materials and Devices for Solid-State Electronics (MADESS) of the National Research Council (CNR), but the interest of many European researchers, who met at the Technical Interest Group on IC Reliability under the CEC, and the encouragement of the CEC were the driving force to widen its purpose and begin a series of European conferences.
- Published
- 1991
13. An investigation of ‘cannot duplicate’ failures.
- Author
-
Williams, R., Banner, J., Knowles, I., Dube, M., Natishan, M., and Pecht, M.
- Subjects
TESTING ,RELIABILITY in engineering ,AUTOMATIC test equipment ,ENGINEERING ,ENVIRONMENTAL engineering - Abstract
Various terms such as ‘cannot duplicate (CND)’, ‘re-test OK (RTOK)’, ‘no fault indicated (NFI)’, ‘no fault found (NFF)’, and ‘no trouble found (NTF)’, are used to describe the inability to replicate field failures during laboratory assessment. This paper uses CND to refer to all such failures. CND failures can make up more than 85% of all observed field failures in avionics and account for more than 90% of all maintenance costs. These statistics can be attributed to a limited understanding of root cause failure characteristics of complex systems, inappropriate means of diagnosing the condition of the system, and the inability to duplicate the field conditions in the laboratory. This paper addresses CND issues with reference to research carried out on samples of an electronics board used as the seat-back processor modules on board the Boeing 777. The boards were monitored continuously using existing on-board comprehensive built-in test equipment. It was found that the hot temperature operating limits of the board decreased by up to 70°C during highly accelerated environmental stress. Furthermore, improperly seated connectors were found to result in spurious component failure reports from the built-in test equipment. This paper suggests that the observed drift in operating limit and connector issues are two likely root causes of CND failures and makes recommendations for addressing them. © Crown Copyright 1998. Reproduced with the permission of the Controller of Her Majesty's Stationery Office. [ABSTRACT FROM AUTHOR]
- Published
- 1998
- Full Text
- View/download PDF
14. Diagnosing the health of engineering systems.
- Author
-
Billinton, Roy, Aboreshaid, Saleh, and Fotuhi-Firuzabad, Mahmud
- Subjects
RELIABILITY in engineering ,PROBABILITY theory ,ENGINEERING systems ,ENGINEERING ,SYSTEMS engineering - Abstract
System reliability evaluation can be performed using the two broad categories of deterministic and probabilistic techniques. The basic weakness associated with deterministic methods is that they do not specifically take into account the likelihood of component failures in the evaluation. There is, however, considerable reluctance to using probabilistic techniques in many engineering areas owing to the difficulty in interpreting the resulting numerical indices. An approach is presented in this paper to alleviate this difficulty by combining deterministic considerations with probabilistic indices to monitor the system well-being in the form of system health and margin states in addition to a conventional risk index. The concept of system well-being is illustrated using relatively simple network configurations. The proposed technique is also applied to a power system network to determine the degree of well-being of the various load points and the overall system. © 1997 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 1997
- Full Text
- View/download PDF
15. A modelling procedure to optimize component safety inspection over a finite time horizon.
- Author
-
Wang, W. and Christer, A. H.
- Subjects
SAFETY ,INDUSTRIAL safety ,MAINTENANCE ,BUILDING inspection ,ENGINEERING ,RENEWAL theory - Abstract
In this paper a model of a safety inspection process is proposed for the expected consequence of inspections over a finite time horizon. A single dominant failure mode is modelled, which has considerable safety or risk consequences assumed measurable either in cost terms or in terms of the probability of failure over the time horizon. The model established extends earlier work assuming an infinite time horizon, and uses the concept of delay time and asymptotic results from the theory of renewal and renewal reward processes. The paper establishes a pragmatic procedure for formulating objective functions which may be optimized to determine the optimal inspection intervals. Merits of both the exact and asymptotic formulations of these objective functions for possible use in the inspection optimization process are considered. Although the procedure for developing an objective function over a finite time horizon assumes perfect inspection, it can be generalized to the imperfect inspection case. Because of the intractability of the mathematics, it is suggested that when optimizing an inspection process over a finite time horizon, an asymptotic formulation of the objective function should be optimized, and this solution then checked and, if necessary, refined using simulation. A numerical example illustrates the performance of the basic periodic inspection policy over different time horizons using the asymptotic solution. The results are compared with simulations performed to estimate the exact expected cost measure. It is shown that the simpler asymptotic solution is satisfactory in the case considered, especially when the time horizon is relatively long. © 1997 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 1997
- Full Text
- View/download PDF
16. IMPROVING AUTOMOTIVE DIMENSIONAL QUALITY BY USING PRINCIPAL COMPONENT ANALYSIS.
- Author
-
Yang, Km
- Subjects
ENGINEERING ,TECHNOLOGY ,METHODOLOGY ,QUALITY ,CASE studies ,RESEARCH - Abstract
Dimensional quality is a measure of conformance of the actual geometry of products with the designed geometry. In the automotive body assembly process, maintaining good dimensional quality is very difficult and critical to the product. In this paper, a dimensional quality analysis and diagnostic tool is developed based on principal component analysis (PCA). In quality analysis, the quality loss due to dimensional variation can be partitioned into a mean deviation and piece-to-piece variation. By using PCA, the piece-to-piece variation can be further decomposed into a set of independent geometrical variation modes. The features of these major variation modes help in identifying the underlying causes of dimensional variation in order to reduce the variation. The variation mode chart developed in this paper provides the explicit and exact geometrical interpretation of variation modes, making PCA easily understood. A case study using an automotive body assembly dimensional quality analysis will illustrate the value and power of this methodology in solving actual engineering problems in a practical manner. [ABSTRACT FROM AUTHOR]
- Published
- 1996
- Full Text
- View/download PDF
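The decomposition sketched in the abstract (quality loss split into a mean deviation plus piece-to-piece variation, with PCA resolving the latter into independent variation modes) can be reproduced on synthetic measurements; every number below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic study: 200 bodies, 4 measurement points. One 'rotation-like'
# variation mode moves all four points together, plus measurement noise.
n_parts, n_points = 200, 4
mode = np.array([1.0, 0.5, -0.5, -1.0])       # a single geometric mode
scores = rng.normal(0.0, 0.8, size=n_parts)   # piece-to-piece amplitudes
noise = rng.normal(0.0, 0.05, size=(n_parts, n_points))
nominal = np.array([0.2, 0.1, -0.1, -0.2])    # constant mean deviation
x = nominal + np.outer(scores, mode) + noise

# Partition of the quality loss: mean deviation vs piece-to-piece variation
mean_dev = x.mean(axis=0)
centered = x - mean_dev

# PCA = eigen-decomposition of the covariance of the centered data
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)         # ascending eigenvalue order
explained = eigvals[::-1] / eigvals.sum()      # leading mode first

print(np.round(mean_dev, 2))   # close to the planted nominal offset
print(float(explained[0]))     # the leading mode dominates the variation
```

The leading eigenvector (up to sign) recovers the planted mode, which is the kind of geometrical interpretation the abstract's variation mode chart provides.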
17. ON PARAMETER DESIGN OPTIMIZATION PROCEDURES.
- Author
-
Bong-Jin Yum and Sun-Woo Ko, C. Julius
- Subjects
MULTIDISCIPLINARY design optimization ,ENGINEERING design ,COMBINATORIAL optimization ,TAGUCHI methods ,QUALITY control ,ENGINEERING - Abstract
The Taguchi idea of parameter design has received considerable attention from statisticians and engineers. However, it is only recently that some of the underlying principles of his ideas have been clarified and other alternative procedures have been proposed. This paper reviews the currently available procedures for parameter design in terms of minimizing expected loss. Advantages and disadvantages of each procedure are discussed for the three types of performance characteristics, and guidelines are provided for the choice of an appropriate procedure for a given parameter design problem. [ABSTRACT FROM AUTHOR]
- Published
- 1991
- Full Text
- View/download PDF
18. USING PROPORTIONAL HAZARD MODELLING IN PLANT MAINTENANCE.
- Author
-
Love, Charles E. and Guo, R.
- Subjects
PLANT maintenance ,PLANT engineering ,BUSINESS enterprises ,MAINTENANCE ,FACILITY management ,ENGINEERING - Abstract
The purpose of this paper is to develop a framework for the application of proportional hazard modelling to plant maintenance. Two regimes are investigated; a good-as-new regime where the hazard rate of the system is refreshed by either failure or preventive maintenance action, and a bad-as-old regime where only preventive maintenance action refreshes the hazard rate. The output of the analysis is the recommendation of optimal preventative maintenance plans under both of these regimes. Data for a local firm are used to illustrate the models. The inclusion of proportional hazard modelling is shown to yield improved maintenance plans in both regimes. A proposal for an adaptive scheme is made such that the maintenance plan can be adjusted as changing plant conditions warrant it. [ABSTRACT FROM AUTHOR]
- Published
- 1991
- Full Text
- View/download PDF
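The proportional hazards structure referred to above can be illustrated with a minimal sketch; the Weibull baseline, parameter values and covariate coding are assumptions for illustration, not taken from the paper.

```python
import math

def ph_hazard(t, z, beta, shape=1.8, scale=1000.0):
    """Proportional-hazards failure rate: h(t | z) = h0(t) * exp(beta . z).

    The baseline h0 is Weibull (increasing for shape > 1); the covariates z
    could encode observed plant condition such as vibration or load.
    """
    baseline = (shape / scale) * (t / scale) ** (shape - 1.0)
    return baseline * math.exp(sum(b * zi for b, zi in zip(beta, z)))

# The PH property: a unit covariate multiplies the hazard by exp(beta),
# regardless of the unit's age t
h_plain = ph_hazard(500.0, [0.0], [0.7])
h_loaded = ph_hazard(500.0, [1.0], [0.7])
print(round(h_loaded / h_plain, 3))  # → 2.014, i.e. exp(0.7)
```

In the paper's good-as-new regime either failure or preventive action refreshes this hazard; in the bad-as-old regime only preventive action does. Both are policy layers on top of a hazard function like this one.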
19. INCREASING IMPORTANCE OF EFFECTS OF MARGINAL PARTS ON RELIABILITY.
- Author
-
Ganter, William A.
- Subjects
RELIABILITY in engineering ,TESTING ,ENGINEERING ,MANUFACTURING defects ,ERRORS ,PRODUCT quality - Abstract
In this paper, marginal parts are equated with low quality and low reliability. Marginal parts can be shown to cause errors in some products during tests. They are also a cause of field failures in these products. Although failures caused by marginal parts still have a random failure-time component, they show much less variation than our traditional failure causes, hidden flaws. I give marginal parts a measurable definition. If marginal effects can be established for a product, then this knowledge can be used to improve reliability. Some examples of products where I believe this marginal effect holds are discussed in this paper. Such marginal effects on reliability are gaining more and more importance in systems of increasing complexity. A strong point of the marginal parts theory framed in this paper is that it can readily be subjected to statistical testing to see whether it holds for any particular product. [ABSTRACT FROM AUTHOR]
- Published
- 1990
- Full Text
- View/download PDF
20. WHAT IS WRONG WITH THE EXISTING RELIABILITY PREDICTION METHODS?
- Author
-
Wong, Kam L.
- Subjects
RELIABILITY in engineering ,ENVIRONMENTAL engineering ,AGING ,ELECTRONICS ,ENGINEERING ,FORECASTING - Abstract
Inaccurate reliability predictions could lead to disasters such as in the case of the U.S. Space Shuttle failure. The question is: 'what is wrong with the existing reliability prediction methods?' This paper examines the methods for predicting the reliability of electronics. Based on information in the literature, measured vs predicted reliability could be as far apart as five to twenty times. Reliability calculated using the five most commonly used handbooks showed that there could be a 100 times variation. The root cause of the prediction inaccuracy is that many of the first-order effect factors are not explicitly included in the prediction methods. These factors include thermal cycling, temperature change rate, mechanical shock, vibration, power on/off, supplier quality difference, reliability improvement with respect to calendar years and ageing. As indicated by the data provided in this paper, neglecting any one of these factors could cause a variation in the predicted reliability by several times. The reliability vs ageing-hour curve showed that there was a 10 times change in reliability from 1000 ageing-hours to 10,000 ageing-hours. Therefore, in order to increase the accuracy of reliability prediction, these factors must be incorporated into the prediction methods. [ABSTRACT FROM AUTHOR]
- Published
- 1990
- Full Text
- View/download PDF
21. Case studies.
- Author
-
Brombacher, Aarnout
- Subjects
CASE studies ,ENGINEERING periodicals ,ENGINEERING ,ISO 9000 Series Standards ,QUALITY control standards - Abstract
The article discusses the publication of case studies in the journal "Quality and Reliability Engineering International." These case studies are considered as a very valuable contribution to the journal. The publication of many case studies on quality management systems such as ISO9000 during the 1990s is recalled. Also provided are instructions for authors who want to submit case studies.
- Published
- 2015
- Full Text
- View/download PDF
22. Using Systemability Function for Periodic Replacement Policy in Real Environments.
- Author
-
Sgarbossa, Fabio, Persona, Alessandro, and Pham, Hoang
- Subjects
RELIABILITY in engineering ,ENGINEERING ,MAINTENANCE ,MAINTAINABILITY (Engineering) ,ENVIRONMENTAL impact analysis - Abstract
In the last decades, various maintenance policies have been developed and widely used, including, but not limited to, replacement policy and preventive maintenance policy, depending on an accurate estimation of the reliability and the failure intensity functions. Many studies, however, have not considered the environmental factors (EFs) and their effects on the survival distribution of operating units. This paper follows up on our recent research about environmental impacts on preventive maintenance by investigating the periodic replacement policy using a systemability approach. The differences between the classical maintenance approach and the systemability approach have been investigated and applied to a real industrial setting to evaluate the importance and the relevance of taking EFs into account in the implementation of one maintenance policy versus another. Copyright © 2014 John Wiley & Sons, Ltd. Highlights: This research investigates the impacts of different operational environments in periodic replacement policy, using a new concept called systemability. New mathematical formulations for periodic replacement policy have been introduced and applied to real case studies. Comparison between the classical approach and the new approach validates the use of systemability and quantifies the impact of operational conditions on total cost. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
23. Exact two‐sided statistical tolerance limits for sample variances.
- Author
-
Sarmiento, Martin G. C., Epprecht, Eugenio K., and Chakraborti, S.
- Subjects
*PRODUCT quality, *DECISION making, *POPULATION, *ENGINEERING, *BIOCHEMISTRY - Abstract
Statistical tolerance intervals are widely used in industry and in various areas of science, especially in conformity assessment and acceptance of products or processes in terms of quality. When the interest is in precision, a tolerance interval for the variance is useful. In this paper, we consider two‐sided tolerance intervals for the population of sample variances for data that arise from a normal distribution. These intervals are useful in applications where one needs information about process deterioration as well as process improvement, to properly assess product quality. In this paper, the theory for these tolerance intervals is developed and tables for the tolerance factors, required to calculate the proposed tolerance limits, are provided for various settings. Construction and implementation of the proposed tolerance intervals are illustrated using a dataset from a real application. A summary and conclusions are offered. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
24. Applications of design of experiments in engineering.
- Author
-
Montgomery, Douglas C.
- Subjects
EDITORIALS ,EXPERIMENTAL design ,ENGINEERING ,SCIENCE ,EXPERIMENTS - Abstract
The author offers his views on an article published in the previous issue, which provided a survey and summary of papers employing designed experiments in engineering and science between 2001 and 2005. He expresses concern that most engineers have little exposure to design of experiments (DOX) in their undergraduate academic training. He also found that the authors failed to discuss computer experiments in much detail.
- Published
- 2008
- Full Text
- View/download PDF
25. Statistical Methods in Kansei Engineering: a Case of Statistical Engineering.
- Author
-
Marco-Almagro, Lluís and Tort-Martorell, Xavier
- Subjects
ENGINEERING ,PRODUCT design ,CUSTOMER services ,EMOTIONS ,EXPERIMENTAL design - Abstract
Kansei engineering (KE) is a methodology used to incorporate emotions in products and services design. Its basic purpose is discovering in which way some properties of a product or a service convey certain emotions in its users. Data are typically collected using questionnaires. KE studies follow a model with three main steps: (i) defining the elicited emotions (semantic space); (ii) deciding on the factors that might affect the responses (space of properties); and (iii) modeling how each factor affects each response (synthesis phase). The procedure resembles that of an experimental design in an industrial context. However, practitioners of KE are hardly ever statisticians. Statistical techniques in KE are sometimes misused, and the discipline could benefit from a more extensive use of statistical methods. KE is thus a good area of application of statistical engineering: focusing not in advancement of statistics but on how current techniques can be best used in a new area. The aim of this paper is twofold: (i) to present the fundamentals of KE while giving an easy to understand example to illustrate the procedure; and (ii) to explain why KE is a good example of statistical engineering by proposing improvements that emanate from the adequate use of statistical techniques. Copyright © 2012 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2012
- Full Text
- View/download PDF
26. Optimal designs of the multivariate synthetic chart for monitoring the process mean vector based on median run length.
- Author
-
Khoo, Michael B. C., Wong, V. H., Wu, Zhang, and Castagliola, Philippe
- Subjects
OPTIMAL designs (Statistics) ,QUALITY control charts ,MULTIVARIATE analysis ,GRAPHIC methods ,ENGINEERING - Abstract
The average run length (ARL) is usually used as a sole measure of performance of a multivariate control chart. The Hotelling's T², multivariate exponentially weighted moving average (MEWMA) and multivariate cumulative sum (MCUSUM) charts are commonly optimally designed based on the ARL. Similar to the case of univariate quality control, in multivariate quality control the shape of the run length distribution changes in accordance with the magnitude of the shift in the mean vector, from highly skewed when the process is in-control to nearly symmetric for large shifts. Because the shape of the run length distribution changes with the magnitude of the shift in the mean vector, the median run length (MRL) provides additional and more meaningful information about the in-control and out-of-control performances of multivariate charts, not given by the ARL. This paper provides a procedure for optimal designs of the multivariate synthetic T² chart for the process mean, based on MRL, for both the zero-state and steady-state modes. Two Mathematica programs, one each for the zero-state and steady-state modes, are given for a quick computation of the optimal parameters of the synthetic T² chart, designed based on MRL. These optimal parameters are provided in the paper for the bivariate case with sample sizes n ∈ {4, 7, 10}. The MRL performances of the synthetic T², MEWMA and Hotelling's T² charts are also compared. Copyright © 2011 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
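For charts whose samples signal independently (Shewhart-type), the run length is geometric, and the ARL/MRL contrast the abstract draws can be computed in closed form. This is a simplified illustration: the synthetic T² chart itself requires a Markov-chain analysis, and p = 0.0027 is simply the usual 3-sigma false-alarm probability.

```python
import math

def geometric_run_length_stats(p):
    """ARL and MRL when each sample signals independently with probability p
    (the geometric run-length model of a Shewhart-type chart)."""
    arl = 1.0 / p
    # Smallest m with P(RL <= m) = 1 - (1 - p)^m >= 0.5
    mrl = math.ceil(math.log(0.5) / math.log(1.0 - p))
    return arl, mrl

# In control (3-sigma limits): the skewness the abstract describes means
# the median run length sits well below the average run length
arl, mrl = geometric_run_length_stats(0.0027)
print(round(arl, 1), mrl)  # → 370.4 257

# Large shift (a signal on almost every sample): the distribution is no
# longer skewed and the two measures agree
print(geometric_run_length_stats(0.5))  # → (2.0, 1)
```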
27. A sequential constant-stress accelerated life testing scheme and its Bayesian inference.
- Author
-
Xiao Liu and Loon-Ching Tang
- Subjects
ACCELERATED life testing ,BAYESIAN field theory ,DATA analysis ,STATISTICS ,WEIBULL distribution ,ENGINEERING - Abstract
In the analysis of accelerated life testing (ALT) data, some stress-life model is typically used to relate results obtained at stressed conditions to those at use condition. For example, the Arrhenius model has been widely used for accelerated testing involving high temperature. Motivated by the fact that some prior knowledge of particular model parameters is usually available, this paper proposes a sequential constant-stress ALT scheme and its Bayesian inference. Under this scheme, test at the highest stress is firstly conducted to quickly generate failures. Then, using the proposed Bayesian inference method, information obtained at the highest stress is used to construct prior distributions for data analysis at lower stress levels. In this paper, two frameworks of the Bayesian inference method are presented, namely, the all-at-one prior distribution construction and the full sequential prior distribution construction. Assuming Weibull failure times, we (1) derive the closed-form expression for estimating the smallest extreme value location parameter at each stress level, (2) compare the performance of the proposed Bayesian inference with that of MLE by simulations, and (3) assess the risk of including empirical engineering knowledge into ALT data analysis under the proposed framework. Step-by-step illustrations of both frameworks are presented using a real-life ALT data set. Copyright © 2008 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
28. 15th ARTS Advances in Reliability Technology Symposium.
- Author
-
Andrews, John
- Subjects
CONFERENCES & conventions ,RELIABILITY in engineering ,ENGINEERING - Abstract
Focuses on the 15th Advances in Reliability Technology Symposium (ARTS) at Loughborough University, England on April 8-10, 2003. Objectives of the ARTS series; Number of papers presented at the symposium.
- Published
- 2004
- Full Text
- View/download PDF
29. Principles of robust design methodology.
- Author
-
Arvidsson, Martin and Gremyr, Ida
- Subjects
DESIGN ,PRODUCT management ,ENGINEERING ,PRODUCT design ,TAGUCHI methods ,ROBUST control - Abstract
The literature on robust design has focused chiefly on the development of methods for identifying robust design solutions. In this paper we present a literature review of conflicts and agreements on the principles of robust design. Through this review four central principles of robust design are identified: awareness of variation, insensitivity to noise factors, application of various methods, and application in all stages of a design process. These principles are comprised into the following definition of robust design methodology: Robust design methodology means systematic efforts to achieve insensitivity to noise factors. These efforts are founded on an awareness of variation and can be applied in all stages of product design. Copyright © 2007 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
30. Application of the Mixed-field-environments concept in lifetime prediction of some ceramic components.
- Author
-
Salmela, Olli
- Subjects
FINITE element method ,NUMERICAL analysis ,RELIABILITY in engineering ,ENGINEERING ,PROBABILITY theory - Abstract
In lifetime prediction, the effect of a certain use environment on the lifetime is taken into account by using an acceleration factor that is specific to that use environment. The fact that the product may be used in different countries with differing environmental conditions results in a set of acceleration factor values. Prior to this paper, the effect of multiple field environments has been taken into account by using an average value of the acceleration factors obtained for several use environments. As the magnitudes of the acceleration factors (each specific to a certain use environment) vary a lot, the use of the average value has resulted in unrealistic lifetime estimates. The unrealism related to the use of the average value can be avoided when using the mixture-of-distributions concept, since the true acceleration factor values, instead of an average value, can then be used. In this paper, the mixture-of-distributions concept is applied for the first time to evaluate the effect of multiple use environments (thermal cycling) on the lifetime of a component population. By using this concept, it is possible to evaluate all of the key figures of reliability for the whole population based on the fractions of the component population that are used in multiple, different use environments. This approach can be applied when allocating maintenance and spare parts for a product that is used worldwide. The mixture-of-distributions concept in lifetime prediction is demonstrated by analyzing the test results of some ceramic leadless chip carrier components. Acceleration factors in four alternative field environments are estimated by running thermo-mechanical finite element analysis simulations. The lifetime performance of the whole component population used in certain alternative field environments is then evaluated by applying the mixture-of-distributions concept. Copyright © 2006 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2007
- Full Text
- View/download PDF
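The mixture-of-distributions concept described above can be sketched with a Weibull life model: population reliability is the fraction-weighted mix of the environment-specific reliabilities, each obtained by scaling time with that environment's own acceleration factor instead of an average. All parameter values here are illustrative assumptions, not results from the paper.

```python
import math

def mixture_reliability(t, eta, beta, envs):
    """Population reliability at time t under the mixture-of-distributions
    concept. envs is a list of (fraction, acceleration_factor) pairs and
    (eta, beta) is the Weibull life at the reference environment."""
    assert abs(sum(w for w, _ in envs) - 1.0) < 1e-9  # fractions must sum to 1
    return sum(w * math.exp(-((af * t) / eta) ** beta) for w, af in envs)
```

The mixed reliability always lies between the best-case and worst-case single-environment curves, which is what makes it usable for fleet-wide maintenance and spares allocation.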
31. Protection Survivability Importance in Systems with Multilevel Protection.
- Author
-
Levitin, Gregory
- Subjects
ALGORITHMS ,COMPUTER algorithms ,COMPUTER programming ,ENGINEERING systems ,ENGINEERING - Abstract
This paper considers systems that can have different states corresponding to different combinations of available elements constituting the system. Each state can be characterized by a performance rate, which is the quantitative measure of a system's ability to perform its task. Both the impact of external factors (attack) and internal causes (failures) affect system survivability, which is determined as the probability of meeting a given demand. In order to increase the system's survivability a multilevel protection is applied to its subsystems. This means that a subsystem and its inner level of protection are in turn protected by the protection of an outer level. This double-protected subsystem has its outer protection and so forth. In such systems, the protected subsystems can be destroyed only if all of the levels of their protection are destroyed. Each level of protection can be destroyed only if all of the outer levels of protection are destroyed. In such systems, different protections play different roles in providing for the system's survivability. The evaluation of the relative influence of the protections' survivability on the survivability of the entire system provides useful information about the importance of these protections. The protection survivability importance index is introduced in order to evaluate this influence and an algorithm for evaluating the index is presented. The relevancy of protection is also considered. Illustrative examples are presented. Copyright © 2004 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
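The multilevel protection logic in this abstract reduces, if each level's destruction given exposure is treated as independent, to a simple product: the protected subsystem is destroyed only when every level from the outermost inward is destroyed. A minimal sketch under that independence assumption (which the abstract does not state explicitly):

```python
def destruction_probability(level_vulnerabilities):
    """Probability that a subsystem behind nested protection levels is
    destroyed, assuming each level is destroyed independently with the
    given probability once all its outer levels are already destroyed.
    Levels are listed from outermost to innermost."""
    p = 1.0
    for v in level_vulnerabilities:
        p *= v
    return p
```

The protection survivability importance index in the paper then measures how much the system survivability changes when one factor in such products is varied.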
32. Refined Rank Regression Method with Censors.
- Author
-
Wang, Wendai
- Subjects
REGRESSION analysis ,ENGINEERING ,COMPUTER software ,MATHEMATICAL statistics ,STATISTICS - Abstract
Reliability engineers often face failure data with suspensions. The rank regression method with an approach introduced by Johnson has been commonly used to handle data with suspensions in engineering practice and commercial software. However, the Johnson method makes partial use of suspension information only—the positions of suspensions, not the exact times to suspensions. A new approach for rank regression with censored data is proposed in this paper, which makes full use of suspension information. Taking advantage of the parametric approach, the refined rank regression obtains the ‘exact’ mean order number for each failure point in the sample. With the ‘exact’ mean order number, the proposed method gives the ‘best’ fit to sample data for an assumed times-to-failure distribution. This refined rank regression is simple to implement and appears to have good statistical and convergence properties. An example is provided to illustrate the proposed method. Copyright © 2004 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
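The Johnson method that this paper refines can be sketched as follows: suspensions do not get a rank of their own, but they shrink the denominator so that later failures receive larger mean-order-number increments. The refined method in the paper replaces these position-based mean order numbers with ones that use the exact suspension times; the sketch below is the classical position-only version.

```python
def johnson_mean_order_numbers(items):
    """Mean order numbers and Benard median ranks for time-ordered
    (time, is_failure) data, using Johnson's method for suspensions."""
    n = len(items)
    mon, out = 0.0, []
    for j, (t, failed) in enumerate(items, start=1):
        if failed:
            # increment grows after suspensions, spreading later failures out
            mon += (n + 1 - mon) / (1 + (n - j + 1))
            out.append((t, mon, (mon - 0.3) / (n + 0.4)))  # Benard approximation
    return out
```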
33. Conditional Lifetime Data Analysis Using the Limited Expected Value Function.
- Author
-
Quigley, John and Walls, Lesley
- Subjects
DATA analysis ,FAILURE analysis ,QUALITY control ,STATISTICAL process control ,SYSTEMS engineering ,ENGINEERING - Abstract
Failure data, and other event data, are often highly censored, which limits the efficacy of many statistical analysis techniques. The limited expected value (LEV) function presents an alternative way of characterizing lifetime distributions. In essence, the LEV provides a means of calculating a truncated mean time to failure (MTTF) (or mean time between failures (MTBF) where appropriate) that is adjusted at each of the censoring times, and so appears potentially suitable for dealing with censored data structures. In theory, the LEV has been defined for many standard distributions; however, its practical use is not well developed. This paper aims to extend the theory of the LEV for typical censoring structures and to develop procedures that will assist in model identification as well as parameter estimation. Applications to typical event data are presented, and the use of the LEV is compared with a selection of existing lifetime distributional analyses based on some preliminary research. Copyright © 2004 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
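The truncated-mean character of the LEV can be sketched with its natural empirical estimator: each observation contributes min(x, t), so units censored at or beyond the truncation time t still contribute their full amount. This illustrates the general idea only, not the identification and estimation procedures developed in the paper.

```python
def empirical_lev(times, t):
    """Empirical limited expected value at truncation time t:
    the sample mean of min(x, t) over all observations."""
    return sum(min(x, t) for x in times) / len(times)
```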
34. Evaluation of the Quality of a Car Braking System by a Dynamic Simulator.
- Author
-
Celentano, G., Iervolino, R., Fontana, Vincenzo, and Porreca, S.
- Subjects
ROAD machinery brakes ,MOTION control devices ,CONSUMERS ,QUALITY ,ENGINEERING - Abstract
Quality has become one of the more important factors in the purchasing decisions of consumers. At present, the tools available for analysing variability in the production process, the main cause of quality degradation, are mostly based on statistical approaches that neglect the physical relationships existing in a man-made system. In this paper a new general methodology is proposed, showing how a mathematical model can be developed from a relatively limited amount of data and a number of experimental tests. Implementing the model in a dynamic simulator allows one to evaluate numerically the effects of the variability of the most critical elements on the quality of the whole system. The effectiveness of the procedure is demonstrated by the evaluation of some quality indexes of a car braking system. Copyright © 2004 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
35. Modelling Feasible Design Regions Using Lattice-based Kernel Methods.
- Author
-
Bates, R. A. and Wynn, H. P.
- Subjects
KRIGING ,NONLINEAR theories ,RELIABILITY in engineering ,ENGINEERING ,LATTICE theory - Abstract
Reliability, robustness and general performance are key indicators when attempting to improve the design of existing engineering systems. The search for an improved design is conducted in the space of all possible designs, referred to as the design space. Often when searching this space, infeasible or unsafe regions are encountered, perhaps because particular combinations of parameter values are not physically possible. Such regions need to be avoided during optimization to ensure that any new design is itself feasible. In addition, when considering the reliability and robustness of a design, it may be important to know how far a feasible design is from any boundary. Indeed, it may be the case that such distances are directly related to the reliability and robustness of the design and it may, therefore, be useful to model the boundary between feasible and infeasible regions. This paper proposes the use of Hilbert bases as a method of identifying points in the design space that are close to, or on, the feasible boundary and then uses this information to model the boundary by defining positive definite kernels on new spaces such as hyperspheres. In this way it is possible to model whole surfaces as opposed to single functions. Copyright © 2004 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2004
- Full Text
- View/download PDF
36. Training for design of experiments using a catapult.
- Author
-
Antony, Jiju
- Subjects
EXPERIMENTAL design ,ENGINEERING ,ENGINEERS ,INDUSTRIAL research ,STATISTICS - Abstract
Design of experiments (DOE) is a powerful approach for discovering which process (or design) variables are most important to the process and then determining at what levels these variables must be kept to optimize the response (or quality characteristic) of interest. This paper presents two catapult experiments which can easily be taught to engineers and managers in organizations as training for design of experiments. The results come from a real, live catapult experiment performed by a group of engineers in a company during a DOE training program. The first experiment was conducted to separate the key factors (or variables) from the trivial ones, and the second was carried out using the key factors to understand the nature of the interactions among them. The results were analysed using simple but powerful graphical tools for rapid and easy understanding by engineers with limited statistical competency. Copyright © 2002 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2002
- Full Text
- View/download PDF
37. Bayesian tolerance interval control limits for attributes. (This article is a U.S. Government work and is in the public domain in the U.S.A.)
- Author
-
Hamada, Michael
- Subjects
ENGINEERING tolerances ,BAYESIAN analysis ,PROBABILITY theory ,ENGINEERING ,STATISTICS - Abstract
The probability content of standard control limits for attributes can vary because distribution parameters that appear in the control limits are estimated based on previous data. This paper proposes using Bayesian tolerance interval control limits which control the probability content at a specified level with a given confidence. Bayesian tolerance interval control limits are developed for np, p, c and u charts and are illustrated with four examples from the literature. Moreover, Bayesian tolerance interval control limits can be used for processes at start-up. Published in 2002 by John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2002
- Full Text
- View/download PDF
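A simplified flavour of the approach for a p chart: with a beta prior, the posterior for the defective fraction p is Beta(a + x, b + n - x), and its quantiles give probability-calibrated limits. The sketch below computes a Monte Carlo posterior quantile under a Jeffreys prior; it is a credible-bound illustration only, not the tolerance-interval construction of the paper, which additionally controls the probability content at a stated confidence.

```python
import random

def posterior_p_quantile(x, n, q, a=0.5, b=0.5, draws=100_000, seed=7):
    """Monte Carlo q-quantile of the Beta(a + x, b + n - x) posterior
    for the defective fraction p (Jeffreys prior by default)."""
    rng = random.Random(seed)
    sample = sorted(rng.betavariate(a + x, b + n - x) for _ in range(draws))
    return sample[int(q * (draws - 1))]

u95 = posterior_p_quantile(5, 100, 0.95)  # e.g. 5 defectives in 100 at start-up
```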
38. Optimal control of a deteriorating process with a quadratic loss function.
- Author
-
Al-Fawzan, M. A. and Rahim, M. A.
- Subjects
PROCESS control systems ,QUALITY control ,AUTOMATIC control systems ,RELIABILITY in engineering ,ENGINEERING - Abstract
This paper considers the optimal determination of the length of the production run and the initial setting of a process that exhibits a linear drift that can start at a random point in time. Quadratic off-target costs and time-based costs of maintenance and salvage value are included in the formulation. The model includes other models proposed in the literature as particular cases. Numerical examples are provided to illustrate the application of the proposed model. Copyright © 2001 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2001
- Full Text
- View/download PDF
39. Eliciting engineering knowledge about reliability during design-lessons learnt from implementation.
- Author
-
Hodge, R., Evans, M., Marshall, J., Quigley, J., and Walls, L.
- Subjects
RELIABILITY in engineering ,ENGINEERING ,MAINTAINABILITY (Engineering) ,SYSTEMS engineering ,PROBABILITY theory ,QUALITY control - Abstract
In electronic design the use of engineering knowledge and experience is considered important in understanding and estimating the reliability performance of complex systems. There are numerous methods proposed for eliciting this knowledge in order to ensure that the data collected are valid and reliable. In this paper we describe our experiences in implementing an elicitation process that aims to extract engineering knowledge about the impact of design changes on a new aerospace product that is a variant of an existing product. The elicitation procedures used will be outlined and the ways in which we evaluated their usefulness will be described. This research generated many useful insights from the engineers and facilitators involved in the elicitation exercise. This paper shares their perspectives on the gains and losses associated with the exercise and makes recommendations for enhancing future procedures based on the lessons learnt. Copyright © 2001 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2001
- Full Text
- View/download PDF
40. Bayesian belief nets for managing expert judgement and modelling reliability.
- Author
-
Sigurdsson, J. H., Wallsand, L. A., and Quigley, J. L.
- Subjects
BAYESIAN analysis ,STATISTICAL decision making ,PROBABILITY theory ,RELIABILITY in engineering ,ENGINEERING ,TESTING - Abstract
Bayesian belief nets (BBNs) provide an effective way of reasoning under uncertainty. They have a firm mathematical background in probability theory and have been used in a variety of application areas, including reliability. BBNs can provide alternative representations of fault trees and reliability block diagrams. BBNs can be used to incorporate expert judgement formally into the modelling process. It has been claimed BBNs may overcome some of the limitations of standard reliability techniques. This paper presents an overview of BBNs and illustrates their use through a simple tutorial on system reliability modelling. The use of BBNs in reliability to date is reviewed. The challenge of using BBNs in reliability practice is explored and areas of research are identified. Copyright © 2001 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2001
- Full Text
- View/download PDF
41. Real-time Reliability Self-assessment in Milling Tools Operation.
- Author
-
Liu, Shujie, Hu, Yawei, Liu, Chi, and Zhang, Hongchao
- Subjects
RELIABILITY in engineering ,MILLING cutters ,ENGINEERING ,MAINTAINABILITY (Engineering) ,MANUFACTURED products - Abstract
To ensure reliable operations, online reliability assessment based on system monitoring is essential, especially for critical machinery or components with high safety requirements. The real-time reliability of milling cutters in practice is one such example, as it determines total manufacturing effectiveness and product quality. Research on how best to estimate cutters' reliability has gained popularity in recent years due to the needs of prognostics and health management. The state space model (SSM), which treats the underlying degradation state as a first-order Markov chain, is widely used for residual life modelling and reliability evaluation. In this paper, a non-linear, non-Gaussian SSM is established based on the tool wear condition. The degradation tendency is predicted by a particle filter algorithm, and the conditional reliability is then calculated from the degradation state and a pre-set threshold. The effectiveness of this approach is demonstrated by a real case study. Copyright © 2015 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
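The final step of the approach described, conditional reliability from a predicted degradation state, reduces to a simple fraction over the particle cloud. A sketch of just that step (the particle filter prediction itself is omitted; particle values and threshold are illustrative):

```python
def conditional_reliability(predicted_particles, failure_threshold):
    """Conditional reliability estimate: the fraction of predicted
    degradation (wear) particles that stay below the failure threshold."""
    n_ok = sum(1 for w in predicted_particles if w < failure_threshold)
    return n_ok / len(predicted_particles)
```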
42. Acceleration Factor Constant Principle and the Application under ADT.
- Author
-
Wang, Hao-Wei and Xi, Wen-Jun
- Subjects
ACCELERATION (Mechanics) ,ENGINEERING ,MATHEMATICAL variables ,STOCHASTIC analysis ,GAUSSIAN distribution ,SIMULATION methods & models - Abstract
Accelerated degradation testing (ADT) has become an efficient approach to evaluating the reliability of highly reliable products. However, when modeling accelerated degradation data with degradation models, it is difficult to determine exactly how the model parameters change as the stress variables vary. At present, these changing rules are mainly assumed according to engineering experience or subjective judgement, which can result in inaccurate extrapolation of the reliability. To establish the changing rules of the parameters, the acceleration factor constant principle and its application under ADT are studied in this paper. It is well known that the acceleration factor between any two different stress levels should be a constant under an effective ADT. For each degradation model, its parameters should therefore obey particular changing rules so that the acceleration factor remains constant throughout an ADT. Taking three extensively used stochastic process models as examples, namely the Wiener process, gamma process, and inverse Gaussian process models, the method of deducing the changing rules of the parameters from the acceleration factor constant principle is demonstrated. A simulation test was conducted to validate the deduced changing rules for the three stochastic process models. An illustrative example involving self-regulating heating cables illustrates the application of the acceleration factor constant principle under ADT. Results indicate that the acceleration factor constant principle offers an appealing and credible approach to help model accelerated degradation data. Copyright © 2016 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
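For the Wiener process case, the acceleration factor constant principle pins down the parameter changing rules: if time at the higher stress is scaled by a constant factor AF, both the drift and the diffusion variance must scale by AF. A sketch of that rule, with the inverse Gaussian mean life used to check the factor (parameter values are illustrative):

```python
def wiener_params_at_stress(mu, sigma2, af):
    """Wiener degradation parameters at a higher stress under a constant
    acceleration factor af: both drift and diffusion variance scale by af,
    i.e. the process is the base process on the time scale t -> af * t."""
    return mu * af, sigma2 * af

def mean_life(threshold, mu):
    """Mean first-passage time to the failure threshold for a Wiener
    degradation process with drift mu (inverse Gaussian mean)."""
    return threshold / mu
```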
43. Reliability and interruption cost prediction using time-dependent failure rates and interruption costs.
- Author
-
Kjølle, Gerd and Holen, Arne T.
- Subjects
RELIABILITY in engineering ,FAILURE time data analysis ,STATISTICS ,MONTE Carlo method ,ENGINEERING - Abstract
The main idea presented and discussed in this paper is a model reproducing a time-dependent component failure rate pattern similar to the observed pattern recorded in failure statistics. This pattern includes all types of failures, whether caused by the weather or by technical and human factors. Failure causes and mechanisms are not modelled explicitly, and the observed pattern is assumed to be representative of the analysis period ahead. Because the model can predict and time-tag component failures, the time-dependent variables of load, repair time and customer-specific interruption costs can be adequately combined to calculate annual reliability indices and interruption costs. This also permits an analytical model which will produce expectation values comparable with average values in a Monte Carlo simulation. © 1998 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 1998
- Full Text
- View/download PDF
44. QUALITY ENGINEERING FOR CONTINUOUS PERFORMANCE IMPROVEMENT IN PRODUCTS AND PROCESSES: A REVIEW AND REFLECTIONS.
- Author
-
Dabade, B. M. and Ray, P. K.
- Subjects
QUALITY control ,TECHNOLOGICAL innovations ,METHODOLOGY ,ENGINEERING ,INDUSTRIAL arts ,MANUFACTURING processes - Abstract
In the face of stiff and demanding international competition, there is a growing resolve in the industrial community to provide customers, and the market in general, with products and services of superior quality. Quality engineering has been established as an important continuous performance improvement methodology during the last decade. Consequently, numerous developments and innovative methods have been reported in the literature by researchers and practitioners alike. In this paper, a critical appraisal of recent developments, improvisations, and innovations in the field of quality engineering is made. It is observed that there exists an immense body of knowledge in terms of better approaches, techniques, methodologies, aids, and analysis tools. However, most of these are oriented toward product or process improvement in a localized manner, and a systematic approach for applying quality engineering across all the important functions in an organization is lacking. In view of this, a comprehensive quality engineering methodology is also proposed in this paper, so that the implementation of quality engineering in an organization will be more effective and the benefits achieved will be substantial in real-life situations. [ABSTRACT FROM AUTHOR]
- Published
- 1996
- Full Text
- View/download PDF
45. HOT-CARRIER RELIABILITY LIFETIMES AS PREDICTED BY BERKELEY'S MODEL.
- Author
-
Meehan, Alan, O'Sullivan, Paula, Hurley, Paul, and Mathewson, Alan
- Subjects
RELIABILITY in engineering ,ENGINEERING ,HOT carriers ,ELECTRONS ,SEMICONDUCTORS ,ELECTRONIC circuits - Abstract
Hot-carrier effects pose a significant reliability problem in modern MOS processes. An accurate method of predicting hot-carrier lifetimes is essential for the development of fine-geometry MOS technology. A hot-carrier degradation model developed by C. Hu et al. at the University of California, Berkeley is widely used to predict device lifetimes at given operating conditions from the results of accelerated tests. This paper demonstrates a new method of performing hot-carrier stress measurements which satisfies the key requirement of this model: the device drain voltage is adjusted during stress in order to maintain a constant ratio of substrate to drain currents. Using this method, it is shown that the Berkeley model makes a minimum lifetime prediction which is about an order of magnitude too short at accelerated stress conditions. This casts doubt on the suitability of the Berkeley model for circuit reliability simulation and for setting industrial reliability benchmarks. A new understanding of the importance of the gate-source voltage during hot-carrier reliability characterization using the Berkeley model is also discussed. [ABSTRACT FROM AUTHOR]
- Published
- 1995
- Full Text
- View/download PDF
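The Berkeley model referred to is commonly stated as a power law in the substrate-to-drain current ratio, tau = C * (W / Id) * (Isub / Id)^(-m). A sketch under that commonly cited form (the coefficients and currents below are illustrative assumptions, not measured values from the paper):

```python
def hot_carrier_lifetime(c, m, i_d, i_sub, width):
    """Berkeley-style hot-carrier lifetime sketch: a power law in the
    substrate-to-drain current ratio, scaled by width over drain current."""
    return (c * width / i_d) * (i_sub / i_d) ** (-m)
```

A lower Isub/Id ratio (milder stress) predicts a much longer lifetime, which is why holding that ratio constant during stress, as the paper's measurement method does, matters for a fair test of the model.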
46. TESTING FOR RELIABILITY IMPROVEMENT OR DETERIORATION IN REPAIRABLE SYSTEMS.
- Author
-
de la Mare, R. F.
- Subjects
ACCELERATED life testing ,REPAIRING ,SPARE parts ,RELIABILITY in engineering ,STATISTICS ,ENGINEERING - Abstract
This paper reviews the derivation and application of the reverse arrangement test which is used to assess whether a system has improved or deteriorated in a reliability sense. Because the well-known and published table of statistics for this test is erroneous, the prime objective of this paper is to provide the correct statistical table. A secondary aim, however, is to demonstrate how this table can be applied when there is a paucity of failure data for each individual system and to show how a more meaningful reliability assessment can be obtained by pooling results, from the reverse arrangement test, for several systems. [ABSTRACT FROM AUTHOR]
- Published
- 1992
- Full Text
- View/download PDF
47. METHODS FOR THE CONTINUOUS ASSESSMENT OF RELIABILITY OF SPECIALIZED EQUIPMENT.
- Author
-
Watkins, A. Li. and Leech, D. J.
- Subjects
RELIABILITY in engineering ,INDUSTRIAL equipment ,SYSTEMS engineering ,ENGINEERING ,MACHINERY ,QUALITY control - Abstract
This paper is concerned with the assessment of reliability for equipment which is highly specialized. Of particular interest is the analysis of field data early in the life of the fleet of such equipment. Factors affecting the data underlying assessments are outlined, and a basic statistical model for describing these data is introduced. Methods of assessment of equipment reliability, and the precision of such assessments, are discussed, examples of data from this basic model are analysed, and further examples illustrate the role of model parameters. The model may also be used to analyse incomplete data sets, and we consider the penalty incurred at different levels of incompleteness. The paper concludes with a discussion outlining further possible modifications and refinements to the basic model. [ABSTRACT FROM AUTHOR]
- Published
- 1991
- Full Text
- View/download PDF
48. THE ROBUSTNESS OF MARKOV RELIABILITY MODELS.
- Author
-
Edgar, John F. and Bendell, Tony
- Subjects
RELIABILITY in engineering ,ENGINEERING ,MARKOV processes ,STOCHASTIC processes ,MATHEMATICAL models ,SIMULATION methods & models - Abstract
Markov models are an established part of current systems reliability and availability analysis. They are extensively used in various applications, including, in particular, electrical power supply systems. One of their advantages is that they considerably simplify availability evaluation, so that the availability of very large and complex systems can be computed. It is generally assumed, with some justification, that the results obtained from such Markov reliability models are relatively robust. It has, however, been known for some time that practical time-to-failure distributions are frequently non-exponential, particular attention being given in much reliability work to the Weibull family. Moreover, additional doubt has recently been cast on the validity of the Markov approach, both because of the work of Professor Kline and others on the non-exponentiality of practical repair time distributions, and because of the advantages, in terms of modelling visibility, of the alternative simulation approach. In this paper we employ results on the ability of k-out-of-n systems to span the coherent set to investigate the robustness of Markov reliability models, based upon a simulation investigation of coherent systems of up to 10 identical components. We treat the case where adequate repair facilities are available for all components. The effects upon the conventional transient and steady-state measures of Weibull departures from exponentiality are considered. In general, the Markov models are found to be relatively robust, with alterations to failure distributions being more important than those to repair distributions, and decreasing hazard rates more critical than increasing hazard rates. Of the measures studied, the mean time to failure is most sensitive to variations in distributional shape. [ABSTRACT FROM AUTHOR]
- Published
- 1986
- Full Text
- View/download PDF
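The "adequate repair facilities" case studied here has a closed-form Markov steady state: each component is independently up with availability mu / (lambda + mu), so a k-out-of-n:G system's availability is a binomial tail. A sketch of this exponential baseline, against which the paper's Weibull departures are compared (rates are illustrative):

```python
from math import comb

def k_out_of_n_availability(n, k, lam, mu):
    """Steady-state availability of a k-out-of-n:G system of identical,
    independently repaired components (failure rate lam, repair rate mu)."""
    a = mu / (lam + mu)  # single-component steady-state availability
    return sum(comb(n, i) * a**i * (1 - a)**(n - i) for i in range(k, n + 1))
```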
49. A NEW APPROACH TO RELIABILITY PREDICTION IS NEEDED.
- Author
-
Florescu, Radu A.
- Subjects
RELIABILITY in engineering ,FAILURE time data analysis ,MATHEMATICAL models ,ENGINEERING ,QUALITY control ,ELECTRONICS - Abstract
This paper starts from the main objections regarding MIL-HDBK-217 and the BELLCORE method for reliability prediction, objections asserting that these methods are approximate, complicated and unconvincing. To support these assertions, and by applying techniques specific to reliability theory, the author has developed a reliability model which is plausible for certain elements of technical systems. The existence of such a model, which in practice is useless because the failure rate expression is too complicated, clearly demonstrates the inefficiency of the classical methods. [ABSTRACT FROM AUTHOR]
- Published
- 1986
- Full Text
- View/download PDF
50. Reliability Assessment of High Voltage Direct Current Grid Protection Schemes.
- Author
-
Chetty, L. and Singh, Y.
- Subjects
RELIABILITY in engineering ,ENGINEERING ,DIRECT currents ,ELECTRIC currents ,RENEWABLE energy sources - Abstract
Recent renewable energy developments in the power system have prompted a growing interest in high voltage direct current (DC) grids. One of the challenges impeding the progress of high voltage DC grids is the design of DC protection systems. This paper presents the development of an energy reliability program which differentiates between the energy reliability performances of five different DC protection schemes. The energy reliability program utilized contingency enumeration to iterate through all network states of the high voltage DC grid. The conditions used by the program for network state differentiation were formulated from minimal cut-sets. The energy reliability analysis results discussed in this study illustrate that the double-bus single breaker protection scheme has the most favourable performance. These results contribute knowledge to the design of DC protection schemes of high voltage DC grids. Copyright © 2014 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
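The minimal cut-set conditions used for network state differentiation correspond to a standard rare-event bound: system unavailability is at most the sum, over minimal cut-sets, of the product of the member components' unavailabilities. A sketch with hypothetical component names and values:

```python
import math

def cutset_unavailability_bound(cut_sets, q):
    """Rare-event upper bound on system unavailability: the sum over
    minimal cut-sets of the product of member unavailabilities q."""
    return sum(math.prod(q[c] for c in cs) for cs in cut_sets)
```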