35 results for "Carol Smidts"
Search Results
2. A method for systematically developing the knowledge base of reactor operators in nuclear power plants to support cognitive modeling of operator performance
- Author
-
Carol Smidts and Yunfei Zhao
- Subjects
Cognitive model, Traceability, Process (engineering), Computer science, Nuclear power, Industrial engineering, Industrial and Manufacturing Engineering, Operator (computer programming), Knowledge base, Nuclear power plant, Safety, Risk, Reliability and Quality, Human reliability
- Abstract
Methods based on cognitive modeling have attracted increasing attention in the human reliability analysis community. In such methods, the operator knowledge base plays a central role. This paper proposes a method for systematically developing the knowledge base of nuclear power plant operators. The method starts with a systematic literature review of a predefined topic. The collected publications are then condensed into summaries, and relevant knowledge is extracted from the summaries using an improved qualitative content analysis method to generate a large number of pieces of knowledge. Lastly, the pieces of knowledge are integrated in a systematic way to generate a knowledge graph consisting of nodes and links. As a case study, the proposed method is applied to develop the knowledge base of reactor operators pertaining to severe accidents in nuclear power plants. The results show that the proposed method exhibits advantages over conventional methods, including reduced reliance on expert knowledge and improved traceability of the process. Generalization of the proposed method to other sources of materials and application of the knowledge base are also discussed. Although this paper is focused on nuclear applications, the proposed method may be extended to other industrial sectors with little additional effort. (A minimal code sketch of the graph-assembly step follows this record.)
- Published
- 2019
- Full Text
- View/download PDF
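The final step of the method above integrates extracted pieces of knowledge into a graph of nodes and links. As an illustration only (not code from the paper), subject-relation-object triples of the kind a qualitative content analysis step might yield can be assembled into such a graph with networkx; all triples and node names below are hypothetical.

```python
import networkx as nx

# Hypothetical triples of the kind a content analysis step might yield.
triples = [
    ("core damage", "caused_by", "loss of coolant"),
    ("loss of coolant", "indicated_by", "low pressurizer level"),
    ("loss of coolant", "mitigated_by", "high-pressure injection"),
    ("high-pressure injection", "requires", "available AC power"),
]

knowledge_graph = nx.DiGraph()
for subject, relation, obj in triples:
    knowledge_graph.add_edge(subject, obj, relation=relation)  # nodes are created on demand

# Query the resulting knowledge graph, e.g., everything directly linked to "loss of coolant".
for _, target, data in knowledge_graph.edges("loss of coolant", data=True):
    print(f"loss of coolant --{data['relation']}--> {target}")
```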
3. A Deductive Method for Diagnostic Analysis of Digital Instrumentation and Control Systems
- Author
-
Jun Yang, Carol Smidts, and Tunc Aldemir
- Subjects
Markov chain, Computer science, Process (computing), Fault injection, Reliability engineering, Software, Reliability (semiconductor), Safety assurance, Control system, State (computer science), Electrical and Electronic Engineering, Safety, Risk, Reliability and Quality
- Abstract
Reliability and safety assurance are of supreme importance in the implementation of digital safety-critical control systems. A deductive method integrated with simulation-based fault injection and testing is presented for out-of-range permanent software fault localization. For fault modeling, an input–output mapping scheme is proposed to characterize the behavior of software modules and represent failure modes in an analogous manner to hardware state definitions. The Markov/cell-to-cell-mapping scheme is used for diagnostics. The diagnostic process is illustrated by several case studies for a boiling water reactor feedwater control system. The case study results show that the diagnostic algorithm is capable of software fault localization in the presence of both single and multiple faults.
- Published
- 2018
- Full Text
- View/download PDF
4. Bridging the simulator gap: Measuring motivational bias in digital nuclear power plant environments
- Author
-
Carol Smidts and Rachel Benish Shirley
- Subjects
Measure (data warehouse), Computer science, Crew, Control room, Industrial and Manufacturing Engineering, Structural equation modeling, Operator (computer programming), Safety, Risk, Reliability and Quality, Set (psychology), Simulation, Causal model, Human reliability
- Abstract
Digital NPP simulator facilities are increasingly popular platforms for Human Reliability Analysis (HRA) and nuclear power research. We propose a method for characterizing and quantifying the gap between data collected in a simulator and data that reflect NPP operations. Using novice operators, we demonstrate how to manipulate and measure the impact of the simulator environment on operator actions. A set of biases is proposed to characterize factors introduced by the simulator environment. There are two categories of simulator bias: Environmental Biases (physical differences between the simulator and the control room) and Motivational Biases (cognitive differences between training in a simulator and operating an NPP). This study examines Motivational Bias. A preliminary causal model of Motivational Biases is introduced and tested in a demonstration experiment using 30 student operators. Data from 41 simulator sessions are analyzed. Data include crew characteristics, operator surveys, and the time to recognize and diagnose the accident in the scenario. Quantitative models of the Motivational Biases using Structural Equation Modeling (SEM) are proposed. With these models, we estimate how the effects of the scenario conditions are mediated by simulator bias, and we demonstrate how to quantify the strength of these effects.
- Published
- 2018
- Full Text
- View/download PDF
5. Reliability analysis of passive systems: An overview, status and research expectations
- Author
-
Samuel Abiodun Olatubosun and Carol Smidts
- Subjects
Passive systems, Nuclear Energy and Engineering, Computer science, Energy Engineering and Power Technology, Safety, Risk, Reliability and Quality, Waste Management and Disposal, Reliability (statistics), Reliability engineering
- Published
- 2022
- Full Text
- View/download PDF
6. Sequential Bayesian inference of transition rates in the hidden Markov model for multi-state system degradation
- Author
-
Wei Gao, Yunfei Zhao, and Carol Smidts
- Subjects
Computer science, Bayesian probability, Process (computing), Bayesian inference, Industrial and Manufacturing Engineering, Synthetic data, Data analysis, Forward algorithm, Data mining, Safety, Risk, Reliability and Quality, Hidden Markov model, Importance sampling
- Abstract
The increasing availability of system performance data and advances in data analytics provide opportunities to optimize maintenance programs for engineered systems, for example, nuclear power plants. One key task in maintenance optimization is to obtain an accurate model for system degradation. In this research, we propose a Bayesian method to address this problem. Noting that systems usually exhibit multiple states and that the actual state of a system usually is not directly observable, in the method we first model the system degradation process and the observation process based on a hidden Markov model. Then we develop a sequential Bayesian inference algorithm based on importance sampling and the forward algorithm to infer the posterior distributions of the transition rates in the hidden Markov model based on available observations. The proposed Bayesian method allows us to take advantage of evidence from multiple sources, and also allows us to perform Bayesian inference sequentially, without the need to use the entire history of observations every time new observations are collected. We demonstrate the proposed method using both synthetic data for a nuclear power plant feedwater pump and realistic data for a nuclear power plant chemistry analytical device. (A minimal inference sketch follows this record.)
- Published
- 2021
- Full Text
- View/download PDF
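The abstract above combines the forward algorithm (for the likelihood of an observation sequence under a hidden Markov model) with importance sampling over the unknown transition rates. Below is a minimal sketch of that combination; the three-state degradation chain, observation matrix, and single-rate parameterization are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical three-state degradation chain: healthy -> degraded -> failed.
OBS = np.array([[0.9, 0.1, 0.0],      # P(sensor reading | true state), assumed known
                [0.1, 0.8, 0.1],
                [0.0, 0.2, 0.8]])

def transition_matrix(rate, dt=1.0):
    """Discrete-time transition matrix driven by a single unknown degradation rate."""
    p = 1.0 - np.exp(-rate * dt)
    return np.array([[1 - p, p, 0.0],
                     [0.0, 1 - p, p],
                     [0.0, 0.0, 1.0]])

def log_likelihood(rate, obs_seq):
    """Forward algorithm: log P(observation sequence | rate)."""
    A = transition_matrix(rate)
    alpha = np.array([1.0, 0.0, 0.0]) * OBS[:, obs_seq[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for y in obs_seq[1:]:
        alpha = (alpha @ A) * OBS[:, y]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Importance sampling: weight prior samples of the rate by the observation likelihood.
rates = rng.uniform(0.01, 0.5, size=2000)     # samples from a uniform prior (assumption)
obs_seq = [0, 0, 0, 1, 1, 2]                  # hypothetical sensor readings
log_w = np.array([log_likelihood(r, obs_seq) for r in rates])
weights = np.exp(log_w - log_w.max())
weights /= weights.sum()
print("posterior mean degradation rate:", round(float(np.sum(weights * rates)), 3))
```

In a truly sequential setting the forward variable and the weights would be updated in place as each new observation arrives, rather than recomputed over the whole history as shown here.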
7. CMS-BN: A cognitive modeling and simulation environment for human performance assessment, part 2 — Application
- Author
-
Yunfei Zhao and Carol Smidts
- Subjects
Cognitive model, Computer science, Bayesian network, Cognition, Human performance technology, Industrial and Manufacturing Engineering, Reliability engineering, Pilot-operated relief valve, Operator (computer programming), Systems design, Safety, Risk, Reliability and Quality, Human reliability
- Abstract
Human operators play a critical role in the operation of complex engineered systems, in particular under abnormal conditions. It is important to assess human performance under conditions of interest and to improve upon that performance by taking effective measures. In this paper, we present the application of a previously developed cognitive modeling and simulation environment to address these two problems. The developed environment simulates how a human operator dynamically interacts with the external system, with a focus on the operator's cognitive activities. Based on the Three Mile Island accident, specifically the stuck-open failure of the pilot-operated relief valve, we demonstrate how the developed environment can be used for human reliability analysis and human performance improvement. For human reliability analysis, besides providing a single human error probability value, the proposed environment allows us to examine in detail how an operator reached a decision or missed an action. For human performance improvement, the proposed environment provides an easy approach for investigating the effects of changes, for example in operator training and system design, on human performance, and therefore for identifying the most effective improvements.
- Published
- 2021
- Full Text
- View/download PDF
8. CMS-BN: A cognitive modeling and simulation environment for human performance assessment, part 1 — methodology
- Author
-
Yunfei Zhao and Carol Smidts
- Subjects
Cognitive model, Computer science, Mechanism (biology), Monte Carlo method, Bayesian network, Cognition, Machine learning, Human performance technology, Industrial and Manufacturing Engineering, Perception, Artificial intelligence, Safety, Risk, Reliability and Quality, Human reliability
- Abstract
Cognitive modeling and simulation studies how a human dynamically interacts with the external world. Human performance assessment based on this concept has long been researched in both the cognitive sciences and engineering disciplines. However, existing methods have difficulties in describing the uncertain relationships in a human's knowledge and in considering the uncertainties in the cognitive process. To tackle these issues, we propose a novel cognitive modeling and simulation environment (CMS-BN) by introducing Bayesian networks to represent a human's knowledge and Monte Carlo simulation to account for the uncertainties in the cognitive process. The proposed environment explicitly models information perception, reasoning, and response in a human's cognitive process. Information perception works as a filtering mechanism to downselect signals from the external world. Reasoning and response are modeled as traversing the human knowledge base, represented as a Bayesian network, to retrieve knowledge and as updating human belief and the attention distribution accordingly. Uncertainties in the cognitive process are characterized through Monte Carlo simulation. The proposed environment also models the interplay between the cognitive process and two performance shaping factors, stress and fatigue, though additional factors can be considered. We expect the proposed environment to be useful in human reliability analysis and human performance improvement. (A minimal belief-update sketch follows this record.)
- Published
- 2021
- Full Text
- View/download PDF
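A core ingredient named above is representing operator knowledge as a Bayesian network and propagating beliefs by Monte Carlo simulation. The toy two-node network and all probabilities below are hypothetical; the sketch only illustrates likelihood-weighted belief updating, not the CMS-BN environment itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy two-node "operator knowledge" network (hypothetical numbers):
#   Leak -> HighPressureAlarm
P_LEAK = 0.05
P_ALARM_GIVEN = {True: 0.95, False: 0.02}   # P(alarm | leak state)

def belief_leak_given_alarm(n_samples=100_000):
    """Monte Carlo (likelihood-weighting) estimate of P(leak | alarm observed)."""
    leak = rng.random(n_samples) < P_LEAK
    # Weight each sample by the likelihood of the observed evidence (alarm = True).
    w = np.where(leak, P_ALARM_GIVEN[True], P_ALARM_GIVEN[False])
    return float((w * leak).sum() / w.sum())

print("P(leak | alarm) ~", round(belief_leak_given_alarm(), 3))
```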
9. Basis for non-propagation domains, their transformations and their impact on software reliability
- Author
-
Chetan Mutha and Carol Smidts
- Subjects
Mechanical system, Robustness (computer science), Computer science, Software design, Fault tolerance, Software system, Safety, Risk, Reliability and Quality, Algorithm, Software quality, Fault detection and isolation, Interval arithmetic
- Abstract
Fault propagation analysis is an important step in determining system reliability and defining fault tolerance strategies. Typically, in the early software design phases, the propagation probability for a fault is assumed to be one. However, the assumption that faults will always propagate significantly underestimates reliability, and valuable resources may be wasted on fixing faults that may never propagate. To determine the fault propagation probability, the concept of flat parts is introduced. A flat part is a property of a function; when multiple functions containing flat parts interact with each other, these flat parts undergo a transformation. During this transformation, flat parts may be killed or preserved, or new flat parts may be generated. Interval arithmetic-based rules to determine such flat part transformations are introduced. A flat part-based propagation analysis can be used to determine the reliability of a software system, or of a software-driven mechanical system expressed functionally. In addition, the information obtained through flat part-based propagation analysis can be used to add sensors within the flat parts to increase the probability of fault detection, thus increasing the robustness of the system under study. (A minimal illustration of the flat-part idea follows this record.)
- Published
- 2017
- Full Text
- View/download PDF
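The flat-part notion can be illustrated with a saturating function: over the saturated portion of its input domain the output is constant, so an out-of-range deviation is masked rather than propagated. The interval check below is a simplified, hedged illustration of that idea under assumed limits, not the paper's interval arithmetic rule set.

```python
def saturate_interval(lo, hi, low=0.0, high=10.0):
    """Interval extension of a clamping (saturation) function, a simple source of flat parts."""
    clamp = lambda v: min(max(v, low), high)
    return (clamp(lo), clamp(hi))

def fault_propagates(nominal_interval, faulty_interval, interval_func):
    """A fault propagates through the function only if the two output intervals differ."""
    return interval_func(*nominal_interval) != interval_func(*faulty_interval)

nominal = (12.0, 14.0)   # nominal input range (hypothetical), already on the flat part
faulty = (15.0, 20.0)    # out-of-range values injected by an upstream fault
print(fault_propagates(nominal, faulty, saturate_interval))   # False: both map to (10.0, 10.0)
```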
10. Component detection in piping and instrumentation diagrams of nuclear power plants based on neural networks
- Author
-
Carol Smidts, Yunfei Zhao, and Wei Gao
- Subjects
Artificial neural network, Computer science, Deep learning, Energy Engineering and Power Technology, Data structure, Convolutional neural network, Object detection, Nuclear Energy and Engineering, Component (UML), Feature (machine learning), Data mining, Instrumentation (computer programming), Artificial intelligence, Safety, Risk, Reliability and Quality, Waste Management and Disposal
- Abstract
Piping and Instrumentation Diagrams (P&IDs) are the most commonly used engineering drawings to describe components and their relationships, and they are one of the most important inputs for data analysis in Nuclear Power Plants (NPPs). In traditional analysis, the information related to the components is extracted manually from the P&IDs. This usually takes a large amount of effort and is error-prone. With the rapid development in the areas of computer vision and deep learning, automatically detecting components and their relationships becomes possible. In this paper, we aim to use the latest neural network models to automatically extract information on components and their identifications from the P&IDs in NPPs. We use a Faster Region-based Convolutional Neural Network (Faster R-CNN) with a ResNet-50 backbone to detect the components in the P&IDs. Compared to common object detection, object detection for P&IDs poses unique challenges to these methods. For example, the P&ID symbols are much smaller than the background, and detecting such small objects remains a challenging task for modern neural networks. To address these challenges, we (1) propose several techniques for data augmentation that effectively solve the problem of training data shortage, and (2) propose a feature grouping strategy for detecting components with distinct features. In addition, we introduce a SegLink model for text detection, which can automatically extract components' identifications from P&IDs. We also develop a method for building a data structure that reflects the relationships between components (e.g., to which pipe a component is connected, or what the downstream or upstream components of one specific component are) based on the extracted information. This data structure can be further used for plant safety analysis and for operation and maintenance cost optimization. Sensitivity analysis and comparison with other Convolutional Neural Networks (CNNs) are performed, and the results of these analyses are also discussed in this paper. The analysis framework has been tested on the P&IDs from a commercial NPP. The Average Precision for components, which is used to measure the performance of the proposed method, is about 98%. The success rates of component-text mapping and component-pipe mapping are 270/275 and 319/319, respectively. It is worth noting that this framework is generic and can also be applied to P&IDs of non-nuclear industries. (A minimal detector sketch follows this record.)
- Published
- 2020
- Full Text
- View/download PDF
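The detection step above relies on a Faster R-CNN detector with a ResNet-50 backbone. Below is a minimal sketch using the standard torchvision fine-tuning recipe; the number of symbol classes is hypothetical, and the sketch does not reproduce the paper's data augmentation, feature grouping, or SegLink text-detection components.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 33  # hypothetical: 32 P&ID symbol classes + background

# Faster R-CNN with a ResNet-50 FPN backbone (torchvision >= 0.13; older releases
# use pretrained=True instead of weights="DEFAULT").
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)  # new detection head

model.eval()
with torch.no_grad():
    # A dummy 3-channel tile; real use would feed cropped, augmented P&ID tiles.
    prediction = model([torch.rand(3, 800, 800)])[0]
print(prediction["boxes"].shape, prediction["labels"].shape, prediction["scores"].shape)
```

Fine-tuning on labeled P&ID tiles would then follow the usual torchvision detection training loop.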
11. Dynamic Bayesian networks based abnormal event classifier for nuclear power plants in case of cyber security threats
- Author
-
Diptendu Mohan Kar, Michael C. Pietrykowski, Indrajit Ray, Yunfei Zhao, Xiaoxu Diao, Carol Smidts, Pavan Kumar Vaddi, and Timothy Mabry
- Subjects
Computer science, Probabilistic logic, Programmable logic controller, Energy Engineering and Power Technology, Conditional probability, Industrial control system, Unobservable, Nuclear Energy and Engineering, Nuclear power plant, Data mining, Safety, Risk, Reliability and Quality, Waste Management and Disposal, Classifier (UML), Dynamic Bayesian network
- Abstract
With the increased adoption of digital systems for instrumentation and control, nuclear power plants have become more vulnerable to cyber-attacks. Such attacks can have very serious implications for a plant's operation, especially if they masquerade as safety events. Thus, it is important that research be focused on distinguishing cyber-attacks from fault-induced safety events so that a correct response can be made in a timely manner. In this paper, an event classifier is presented to classify abnormal events in nuclear power plants as either fault-induced safety events or cyber-attacks. The process of inferring the hidden (unobservable) state of a system (normal or faulty) through observable physical sensor measurements has been a long-standing industry practice. There has also been a recent surge of literature discussing the use of network traffic data for the detection of cyber-attacks on industrial control systems. In the classifier we present, both the physical and the network behaviors of a nuclear power plant during abnormal events (safety events or cyber-attacks) are used to infer the probabilities of the states of the plant. The nature of the abnormal event in question is then determined based on these probabilities. The Dynamic Bayesian Network (DBN) methodology is used for this purpose since it is an appropriate framework for inferring hidden states of a system from observed variables through probabilistic reasoning. In this paper we introduce a DBN-based abnormal event classifier and an architecture to implement this classifier as part of a monitoring system. An experimental environment is set up with a two-tank system in conjunction with a nuclear power plant simulator and a programmable logic controller. A set of 27 cyber-attacks and 14 safety events was systematically designed for the experiment; 6 cyber-attacks and 2 safety events were used to manually fine-tune the Conditional Probability Tables (CPTs) of the 2-timeslice dynamic Bayesian network (2T-DBN). Out of the remaining 33 events, the nature of the abnormal event was successfully identified in all 33 cases, and the location of the cyber-attack or fault was successfully determined in 32 cases. The case study demonstrates the applicability of the methodology developed. Further research should examine the practicality of implementing the proposed monitoring system on a real-world system and the issues associated with cost optimization. (A minimal forward-filtering sketch follows this record.)
- Published
- 2020
- Full Text
- View/download PDF
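The classifier described above infers a hidden plant state (normal, fault-induced event, or cyber-attack) from both physical and network observations using a two-timeslice DBN. The forward-filtering sketch below uses made-up conditional probabilities and binary anomaly flags purely for illustration; it is not the structure or the CPTs used in the paper.

```python
import numpy as np

STATES = ["normal", "fault", "attack"]

# Hypothetical 2-timeslice DBN parameters (illustrative numbers only).
TRANSITION = np.array([[0.97, 0.02, 0.01],
                       [0.00, 1.00, 0.00],
                       [0.00, 0.00, 1.00]])

# P(physical anomaly | state) and P(network anomaly | state): a fault disturbs the
# physics but not the network traffic; an attack tends to disturb both.
P_PHYS = np.array([0.05, 0.90, 0.85])
P_NET  = np.array([0.05, 0.05, 0.80])

def filter_step(belief, phys_anomaly, net_anomaly):
    """One forward-filtering step: predict with the transition model, then weight
    by the likelihood of the two (assumed independent) observation streams."""
    predicted = belief @ TRANSITION
    like_phys = P_PHYS if phys_anomaly else 1.0 - P_PHYS
    like_net  = P_NET  if net_anomaly  else 1.0 - P_NET
    updated = predicted * like_phys * like_net
    return updated / updated.sum()

belief = np.array([1.0, 0.0, 0.0])                         # start in the normal state
for obs in [(True, False), (True, True), (True, True)]:    # example observation stream
    belief = filter_step(belief, *obs)
print(dict(zip(STATES, belief.round(3))))
```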
12. Finite-horizon semi-Markov game for time-sensitive attack response and probabilistic risk assessment in nuclear power plants
- Author
-
Linan Huang, Quanyan Zhu, Yunfei Zhao, and Carol Smidts
- Subjects
Mathematical optimization, Probabilistic risk assessment, Computer science, Industrial and Manufacturing Engineering, Dynamic programming, Nash equilibrium, Nuclear power plant, State space, Safety, Risk, Reliability and Quality, Solution concept, Risk assessment, Game theory
- Abstract
Cybersecurity has drawn increasing attention in the nuclear industry. To improve the cybersecurity posture, it is important to develop effective methods for cyber-attack response and cybersecurity risk assessment. In this research, we develop a finite-horizon semi-Markov general-sum game between the defender (i.e., the plant operator) and the attacker to obtain a time-sensitive attack response strategy and real-time risk assessment in nuclear power plants. We propose methods for identifying system states of concern, to reduce the state space, and for determining state transition probabilities by integrating probabilistic risk assessment techniques. After a proper discretization of the developed continuous-time model, we use dynamic programming to derive the time-varying and state-dependent strategy of the defender based on the solution concept of the mixed-strategy Nash equilibrium. For risk assessment, three risk metrics are considered, and an exact analytical algorithm and a Monte Carlo simulation-based algorithm for obtaining the metrics are developed. Both players' strategies and the risk metrics are illustrated using a digital feedwater control system used in pressurized water reactors. The results show that the proposed method can support plant operators in timely cyber-attack response and effective risk assessment, reduce the risk, and improve the resilience of nuclear power plants to malicious cyber-attacks. (A backward-induction sketch follows this record.)
- Published
- 2020
- Full Text
- View/download PDF
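The defender strategy described above is obtained by backward induction over a finite horizon, solving a matrix game at each stage and state of the discretized model. The sketch below illustrates that structure for a zero-sum simplification solved by linear programming (the paper formulates a general-sum game and uses mixed-strategy Nash equilibria); the states, actions, rewards, and transition probabilities are all hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 2-state, 2-action-per-player model (illustrative numbers only).
# R[s, d, a]    : defender reward for (defender action d, attacker action a) in state s
# P[s, d, a, :] : next-state distribution
R = np.array([[[ 0.0, -1.0], [-0.2, -0.3]],
              [[-2.0, -5.0], [-2.5, -3.0]]])
P = np.array([[[[0.9, 0.1], [0.6, 0.4]], [[0.95, 0.05], [0.8, 0.2]]],
              [[[0.3, 0.7], [0.2, 0.8]], [[0.5, 0.5], [0.4, 0.6]]]])
HORIZON, N_S = 5, 2

def solve_matrix_game(M):
    """Value and defender mixed strategy of a zero-sum matrix game (defender maximizes)."""
    n = M.shape[0]
    c = np.r_[np.zeros(n), -1.0]                      # maximize the game value v
    A_ub = np.c_[-M.T, np.ones(M.shape[1])]           # v <= x^T M[:, j] for every attacker column j
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(M.shape[1]),
                  A_eq=np.r_[np.ones(n), 0.0].reshape(1, -1), b_eq=[1.0],
                  bounds=[(0, None)] * n + [(None, None)])
    assert res.success
    return res.x[-1], res.x[:n]

V = np.zeros(N_S)                                     # terminal values
for t in reversed(range(HORIZON)):
    V_new = np.empty(N_S)
    for s in range(N_S):
        Q = R[s] + P[s] @ V                           # stage game: immediate reward + expected future value
        V_new[s], strategy = solve_matrix_game(Q)
        if t == 0:
            print(f"state {s}: value {V_new[s]:.2f}, defender mixed strategy {strategy.round(2)}")
    V = V_new
```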
13. A control-theoretic approach to detecting and distinguishing replay attacks from other anomalies in nuclear power plants
- Author
-
Carol Smidts and Yunfei Zhao
- Subjects
Exploit, Computer science, Energy Engineering and Power Technology, Watermark, Nuclear power, Computer security, Nuclear Energy and Engineering, Random noise, Safety, Risk, Reliability and Quality, Null hypothesis, Waste Management and Disposal, Replay attack
- Abstract
The wider use of digital systems in nuclear power plants has raised security concerns for the utilities, the regulatory entities, and the public. These digital systems provide malicious attackers with opportunities to exploit vulnerabilities and cause physical damage. Replay attacks are one such type of attack: they are easy to perform and, when combined with other types of attacks, can potentially lead to significant consequences. Therefore, timely detection of a replay attack is of vital importance for the operator to take mitigation measures. It is also important to distinguish between a replay attack and other anomalies, since they are different in nature (one intentional and the other random) and will require different responses from the operator. In this research, we propose a control-theoretic method to address this problem. The method consists of the injection of random noise (i.e., a physical watermark) into the control input and two chi-squared tests. The physical watermark is used to excite the system being controlled so that the replay attack, if it exists, can be uncovered. The first chi-squared test is used to detect anomalies in the system, including replay attacks and other anomalies. If the null hypothesis in the first test is rejected, we use the second chi-squared test to determine whether the anomaly is a replay attack or another anomaly. We demonstrate the proposed method using a steam generator of the kind found in pressurized water reactors. The results of different cases are presented and discussed. (A minimal detection sketch follows this record.)
- Published
- 2020
- Full Text
- View/download PDF
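The scheme above injects a random physical watermark into the control input and applies chi-squared tests to the prediction residuals: a replayed measurement stream cannot respond to the fresh watermark, so the residual statistic inflates. The scalar plant, noise levels, and naive one-step predictor below are hypothetical, and the single test shown corresponds only to the first (anomaly-detection) stage of the paper's two-test procedure.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
a, b, sigma = 0.9, 1.0, 0.05               # hypothetical scalar plant x' = a*x + b*u, noise std
WINDOW, ALPHA = 25, 0.01
THRESHOLD = chi2.ppf(1 - ALPHA, df=WINDOW)
RES_STD = sigma * np.sqrt(2 + a**2)        # residual std of this naive predictor when no attack is present

def detection_statistic(replay: bool, n=200, delay=50):
    """Chi-squared statistic of the last WINDOW watermarked prediction residuals."""
    x, x_hat = 0.0, 0.0
    residuals, recorded = [], []
    for k in range(n):
        u = 1.0                            # nominal control input
        w = rng.normal(0.0, 0.2)           # physical watermark added to the input
        x = a * x + b * (u + w) + rng.normal(0.0, sigma)
        y_true = x + rng.normal(0.0, sigma)
        recorded.append(y_true)
        # A replay attacker substitutes previously recorded outputs, which cannot
        # reflect the freshly injected watermark.
        y = recorded[k - delay] if (replay and k >= delay) else y_true
        y_hat = a * x_hat + b * (u + w)    # one-step prediction from the last measurement
        residuals.append(y - y_hat)
        x_hat = y
    r = np.array(residuals[-WINDOW:])
    return float(np.sum((r / RES_STD) ** 2))

for replay in (False, True):
    g = detection_statistic(replay)
    print(f"replay={replay}: statistic={g:.1f}  threshold={THRESHOLD:.1f}  alarm={g > THRESHOLD}")
```

With a Kalman-filter innovation sequence, as in the usual watermarking literature, the residuals would be exactly white; the naive predictor used here makes the statistic only approximately chi-squared but keeps the sketch short.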
14. Development of a quantitative Bayesian network mapping objective factors to subjective performance shaping factor evaluations: An example using student operators in a digital nuclear power plant simulator
- Author
-
Rachel Benish Shirley, Carol Smidts, and Yunfei Zhao
- Subjects
Computer science, Bayesian probability, Bayesian network, Context (language use), Variation (game tree), Bayesian inference, Industrial and Manufacturing Engineering, Nuclear power plant, Safety, Risk, Reliability and Quality, Baseline (configuration management), Simulation, Human reliability
- Abstract
Traditional human reliability analysis methods consist of two main steps: assigning values for performance shaping factors (PSFs), and assessing human error probability (HEP) based on PSF values. Both steps rely on expert judgment. Considerable advances have been made in reducing reliance on expert judgment for HEP assessment by incorporating human performance data from various sources (e.g., simulator experiments); however, little has been done to reduce reliance on expert judgment for PSF assignment. This paper introduces a data-driven approach for assessing PSFs in Nuclear Power Plants (NPPs) based on contextual information. The research illustrates how to develop a Bayesian PSF network using data collected from student operators in an NPP simulator. The approach starts with a baseline PSF model that calculates PSF values from context information during an accident scenario. Then, a Bayesian model is developed to link the baseline model to the Subjective PSFs. Two additional factors are included: simulator bias and context information. Results and analysis cover the variation between the model's predictions and the training dataset, as well as the significance of each element in the model. The proposed approach reduces the reliance of PSF assignment on expert judgment and is particularly suitable for dynamic human reliability analysis.
- Published
- 2020
- Full Text
- View/download PDF
15. Three suggestions on the definition of terms for the safety and reliability analysis of digital systems
- Author
-
Carol Smidts and Man Cheol Kim
- Subjects
Vocabulary, Task group, Probabilistic risk assessment, Computer science, Control (management), Data science, Industrial and Manufacturing Engineering, Terminology, Reliability engineering, Safety, Risk, Reliability and Quality, Set (psychology), Reliability (statistics), Dependency (project management)
- Abstract
As digital instrumentation and control systems are being progressively introduced into nuclear power plants, a growing number of related technical issues that need to be resolved are coming to light. As a result, an understanding of relevant terms and basic concepts becomes increasingly important. Under the framework of the OECD/NEA WGRISK DIGREL Task Group, the authors were involved in reviewing definitions of terms forming the supporting vocabulary for addressing issues related to the safety and reliability analysis of digital instrumentation and control (SRA of DI&C). These definitions were extracted from various standards regulating the disciplines that form the technical and scientific basis of SRA of DI&C. The authors discovered that different definitions are provided by different standards within a common discipline and are used differently across disciplines. This paper raises the concern that a common understanding of terms and basic concepts has not yet been established to address the very specific technical issues facing SRA of DI&C. Based on the lessons learned from the review of the definitions of interest and the analysis of the dependency relationships existing between these definitions, this paper establishes a set of recommendations for the development of a consistent terminology for SRA of DI&C.
- Published
- 2015
- Full Text
- View/download PDF
16. Hardware Error Likelihood Induced by the Operation of Software
- Author
-
Manuel Rodriguez, Ming Li, Carol Smidts, Bing Huang, and Joseph Bernstein
- Subjects
Engineering, Software quality, Reliability engineering, Reliability (semiconductor), Embedded software, Software, Life-critical system, CMOS, Logic gate, Transient (computer programming), Electrical and Electronic Engineering, Safety, Risk, Reliability and Quality, Computer hardware
- Abstract
The influence of the software, and its interaction and interdependency with the hardware, in the creation and propagation of hardware failures is usually neglected in reliability analyses of safety-critical systems. The software's operation is responsible for the usage of semiconductor devices over the system lifetime. This usage consists of voltage changes and current flows that steadily degrade the materials of circuit devices until the degradation becomes permanent and the device can no longer perform its intended function. At the circuit level, these failures manifest as stuck-at values, signal delays, or circuit functional changes. These failures are permanent in nature. Due to the extremely high scaling of complementary metal-oxide-semiconductor (CMOS) technology into deep submicron regimes, permanent hardware failures are a key concern and can no longer be neglected relative to transient failures in radiation-intense applications. Our work proposes a methodology for the reliability analysis of permanent failure manifestations of hardware devices due to the usage induced by the execution of embedded software applications. The methodology is illustrated with a case study based on a safety-critical application.
- Published
- 2011
- Full Text
- View/download PDF
17. Dynamic reliability of digital-based transmitters
- Author
-
Christophe Bérenguer, Carol Smidts, Florent Brissaud, and Anne Barros
- Subjects
Engineering, Computational complexity theory, Distributed computing, Piecewise-deterministic Markov process, Markov process, Dynamic reliability, Industrial and Manufacturing Engineering, Digital-based system, Safety, Risk, Reliability and Quality, Information exchange, Simulation, Data processing, Probabilistic dynamics, Probabilistic risk assessment, Transmitter, Petri net, Intelligent transmitter, Smart sensor
- Abstract
Dynamic reliability explicitly handles the interactions between the stochastic behaviour of system components and the deterministic behaviour of process variables. While dynamic reliability provides a more efficient and realistic way to perform probabilistic risk assessment than "static" approaches, its industrial-level applications are still limited. Factors contributing to this situation are the inherent complexity of the theory and the lack of a generic platform. More recently, the increased use of digital-based systems has also introduced additional modelling challenges related to specific interactions between system components. Typical examples are the "intelligent transmitters," which are able to exchange information and to perform internal data processing and advanced functionalities. To make a contribution to solving these challenges, the mathematical framework of dynamic reliability is extended to handle the data and information which are processed and exchanged between system components. Stochastic deviations that may affect system properties are also introduced to enhance the modelling of failures. A formalized Petri net approach is then presented to perform the corresponding reliability analyses using numerical methods. Following this formalism, a versatile model for the dynamic reliability modelling of digital-based transmitters is proposed. Finally, the framework's flexibility and effectiveness are demonstrated on a substantial case study involving a simplified model of a nuclear fast reactor.
- Published
- 2011
- Full Text
- View/download PDF
18. Predicting the types and locations of faults introduced during an imperfect repair process and their impact on reliability
- Author
-
Ying Shi and Carol Smidts
- Subjects
Engineering, Strategy and Management, Software development, Process (computing), Application software, Software quality, Reliability engineering, Software, Software fault tolerance, Software reliability testing, Safety, Risk, Reliability and Quality
- Abstract
Imperfect debugging of software development faults (called primary faults) will lead to the creation of new software faults, denoted secondary faults. Secondary faults are typically fewer in number than the initial primary faults and are introduced late in the testing phase. As such, it is unlikely that they will be observed during testing, and their failure characteristics are unlikely to be assessed accurately. This is an issue since they may display different propagation characteristics than the primary faults that led to their creation. In particular, their location will be distributed non-uniformly around the fault being fixed. This paper proposes a methodology to assess the impact of secondary faults on reliability based on predicting their possible types and locations. The methodology combines a fault taxonomy, code mutation, and Bayesian statistics. The methodology is applied to portions of the application software code of a nuclear reactor protection system. This paper concludes with a discussion on the integration of the results within existing Software Reliability Growth Models.
- Published
- 2010
- Full Text
- View/download PDF
19. Probabilistic risk assessment framework for software propagation analysis of failures
- Author
-
Y Wei, Carol Smidts, and Manuel Rodriguez
- Subjects
Engineering, Probabilistic risk assessment, Life-critical system, Component (UML), Component-based software engineering, Risk-based testing, Software reliability testing, Safety, Risk, Reliability and Quality, Software quality, Software metric, Reliability engineering
- Abstract
Probabilistic risk assessment (PRA) is a methodology consisting of techniques to assess the probability of failure or success of a system. It has been proven to be a systematic, logical, and comprehensive methodology for risk assessment. However, the contribution of software to risk has not been well studied. To address this shortcoming, recent research has focused on the development of an approach to systematically integrate software risk contributions into the PRA framework. The latter research has identified as key the need to quantify various major software-failure-related contributions to risk. Of these contributions, the quantification of input failures is the topic of this paper. An input failure consists of a failure of a system component, directly or indirectly connected to a software component, which reaches the software input and propagates through the software component. The paper studies and quantifies the impact of input failures on the software component and, subsequently, on the rest of the system, and outlines a framework to systematically conduct such an analysis. An application to a safety-critical system is also provided that illustrates the concepts introduced in the paper.
- Published
- 2010
- Full Text
- View/download PDF
20. A framework to integrate software behavior into dynamic probabilistic risk assessment
- Author
-
Dongfeng Zhu, Ali Mosleh, and Carol Smidts
- Subjects
Risk analysis, Object-oriented programming, Engineering, Probabilistic risk assessment, Modeling language, System safety, Computer security, Industrial and Manufacturing Engineering, Software, Safety, Risk, Reliability and Quality, Software engineering, Representation (mathematics), Reactive system
- Abstract
Software plays an increasingly important role in modern safety-critical systems. Although research has been done to integrate software into the classical probabilistic risk assessment (PRA) framework, current PRA practice overwhelmingly neglects the contribution of software to system risk. Dynamic probabilistic risk assessment (DPRA) is considered to be the next generation of PRA techniques. DPRA is a set of methods and techniques in which simulation models that represent the behavior of the elements of a system are exercised in order to identify risks and vulnerabilities of the system. The fact remains, however, that modeling software for use in the DPRA framework is quite complex, and very little has been done to address the question directly and comprehensively. This paper develops a methodology to integrate software contributions in the DPRA environment. The framework includes a software representation and an approach to incorporate the software representation into the DPRA environment SimPRA. The software representation is based on multi-level objects, and the paper also proposes a framework to simulate the multi-level objects in the simulation-based DPRA environment. This is a new methodology to address the state explosion problem in the DPRA environment. This study is the first systematic effort to integrate software risk contributions into DPRA environments.
- Published
- 2007
- Full Text
- View/download PDF
21. QRAS—the quantitative risk assessment system
- Author
-
Ali Mosleh, Carol Smidts, and Frank J. Groen
- Subjects
Risk analysis, Engineering, Operations research, Binary decision diagram, Event (computing), Software tool, Probabilistic logic, Industrial and Manufacturing Engineering, Risk analysis (engineering), Safety, Risk, Reliability and Quality, Risk assessment
- Abstract
This paper presents an overview of QRAS, the Quantitative Risk Assessment System. QRAS is a PC-based software tool for conducting Probabilistic Risk Assessments (PRAs), which was developed to address NASA's risk analysis needs. QRAS is, however, applicable in a wide range of industries. The philosophy behind the development of QRAS is to bridge communication and skill gaps between managers, engineers, and risk analysts by using representations of the risk model and analysis results that are easy for each of those groups to comprehend. For that purpose, event sequence diagrams (ESDs) are used as a replacement for event trees (ETs) to model scenarios, and the quantification of events is possible through a set of quantification models familiar to engineers. An automated common cause failure (CCF) modeling tool further aids the risk modeling. QRAS applies BDD-based algorithms for the accurate and efficient computation of risk results. The paper presents QRAS' modeling and analysis capabilities. The performance of the underlying BDD algorithm is assessed and compared to that of another PRA software tool, using a case study extracted from the International Space Station PRA.
- Published
- 2006
- Full Text
- View/download PDF
22. Special Section on the 25th IEEE International Symposium on Software Reliability Engineering (ISSRE 2014)
- Author
-
Roberto Pietrantuono, Katerina Goseva-Popstojanova, and Carol Smidts
- Subjects
Computer science, Systems engineering, Special section, Electrical and Electronic Engineering, Safety, Risk, Reliability and Quality, Software engineering, Software quality
- Abstract
The papers in this special section were presented at the 2014 International Symposium on Software Reliability Engineering (ISSRE) that was held from 3 to 6 November 2014 in Naples, Italy.
- Published
- 2017
- Full Text
- View/download PDF
23. A stochastic model of fault introduction and removal during software development
- Author
-
M. Stutzke and Carol Smidts
- Subjects
Computer science, Software development, Software quality, Software metric, Reliability engineering, Software sizing, Software construction, Goal-Driven Software Development Process, Software quality analyst, Software reliability testing, Electrical and Electronic Engineering, Safety, Risk, Reliability and Quality
- Abstract
Two broad categories of human error occur during software development: (1) development errors made during requirements analysis, design, and coding activities; and (2) debugging errors made during attempts to remove faults identified during software inspections and dynamic testing. This paper describes a stochastic model that relates the software failure intensity function to development and debugging error occurrence throughout all software life-cycle phases. Software failure intensity is related to development and debugging errors because data on development and debugging errors are available early in the software life-cycle and can be used to create early predictions of software reliability. Software reliability then becomes a variable which can be controlled up front, i.e., as early as possible in the software development life-cycle. The model parameters were derived based on data reported in the open literature. A procedure to account for the impact of influencing factors (e.g., experience, schedule pressure) on the parameters of this stochastic model is suggested. This procedure is based on the success likelihood methodology (SLIM). The stochastic model is then used to study the introduction and removal of faults and to calculate the consequent failure intensity value for a small software system developed using a waterfall software development process.
- Published
- 2001
- Full Text
- View/download PDF
24. Identification of missing scenarios in ESDs using probabilistic dynamics
- Author
-
Carol Smidts and S. Swaminathan
- Subjects
Computer science, Monte Carlo method, Probabilistic logic, Industrial and Manufacturing Engineering, Identification (information), Variance reduction, Data mining, Safety, Risk, Reliability and Quality, Simulation
- Abstract
The event sequence diagram (ESD) framework can be used to qualitatively represent dynamic scenarios. The solution of ESDs can be performed in an analytical manner. Since the construction of ESDs has some inherent analyst dependence, there is scope for omitting scenarios due to certain simplifying assumptions. This is one of the prime drawbacks of the ESD framework. This paper presents an approach for identifying missing scenarios by combining ESDs with probabilistic dynamics. The approach also helps in reducing the variance of Monte Carlo simulation procedures.
- Published
- 1999
- Full Text
- View/download PDF
25. The mathematical formulation for the event sequence diagram framework
- Author
-
S. Swaminathan and Carol Smidts
- Subjects
Fault tree analysis, Theoretical computer science, Probabilistic risk assessment, Computer science, Diagram, Probabilistic logic, Markov process, Industrial and Manufacturing Engineering, Safety, Risk, Reliability and Quality
- Abstract
The Event Sequence Diagram (ESD) framework allows modeling of dynamic situations of interest to PRA analysts. A qualitative presentation of the framework was given in an earlier article. The mathematical formulation for the components of the ESD framework is described in this article. The formulation was derived from the basic probabilistic dynamics equations. For tackling certain dynamic non-Markovian situations, the probabilistic dynamics framework was extended. The mathematical treatment of dependencies among fault trees in a multi-layered ESD framework is also presented. (An illustrative form of the basic probabilistic dynamics equation follows this record.)
- Published
- 1999
- Full Text
- View/download PDF
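For orientation, a commonly cited form of the basic probabilistic dynamics (Chapman-Kolmogorov type) equation that this line of work builds on is shown below. It is included purely as an illustration from the general dynamic-reliability literature; it is not copied from the paper, which extends the framework to non-Markovian situations.

```latex
% Commonly cited form of the probabilistic dynamics equation (illustrative notation).
\frac{\partial \pi_i(\bar{x},t)}{\partial t}
  + \nabla \cdot \left[ \bar{f}_i(\bar{x})\, \pi_i(\bar{x},t) \right]
  = \sum_{j \neq i} p(j \to i \mid \bar{x})\, \lambda_j(\bar{x})\, \pi_j(\bar{x},t)
  - \lambda_i(\bar{x})\, \pi_i(\bar{x},t)
```

Here \pi_i(\bar{x},t) is the probability density of occupying hardware/operator state i with process variables \bar{x} at time t, \bar{f}_i is the deterministic evolution of \bar{x} in state i, \lambda_j is the total transition rate out of state j, and p(j \to i \mid \bar{x}) is the probability that such a transition lands in state i.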
26. An architectural model for software reliability quantification: sources of data
- Author
-
D. Sova and Carol Smidts
- Subjects
Computer science, Software development, Software requirements specification, Industrial and Manufacturing Engineering, Reliability engineering, Software framework, Software sizing, Software construction, Software reliability testing, Software verification and validation, Reference architecture, Safety, Risk, Reliability and Quality
- Abstract
Software reliability assessment models in use today treat software as a monolithic block. An aversion towards 'atomic' models seems to exist. These models appear to add complexity to the modeling and the data collection, and seem intrinsically difficult to generalize. In 1997, we introduced an architecturally based software reliability model called FASRE. The model is based on an architecture derived from the requirements, which captures both functional and nonfunctional requirements, and on a generic classification of functions, attributes, and failure modes. The model focuses on the evaluation of failure mode probabilities and uses a Bayesian quantification framework. Failure mode probabilities of functions and attributes are propagated to the system level using fault trees. The model can incorporate any type of prior information, such as results of developers' testing or historical information on a specific functionality and its attributes, and is ideally suited for reusable software. By building an architecture and deriving its potential failure modes, the model forces early appraisal and understanding of the weaknesses of the software, allows reliability analysis of the structure of the system, and provides assessments at a functional level as well as at the system level. In order to quantify the probability of failure (or the probability of success) of a specific element of our architecture, data are needed. The term element of the architecture is used here in its broadest sense to mean a single failure mode or a higher level of abstraction such as a function. The paper surveys the potential sources of software reliability data available during software development. Next, the mechanisms for incorporating these sources of relevant data into the FASRE model are identified.
- Published
- 1999
- Full Text
- View/download PDF
27. The Event Sequence Diagram framework for dynamic Probabilistic Risk Assessment
- Author
-
S. Swaminathan and Carol Smidts
- Subjects
Event tree, Fault tree analysis, Probabilistic risk assessment, Computer science, Diagram, Probabilistic logic, Industrial and Manufacturing Engineering, System dynamics, Safety, Risk, Reliability and Quality, Representation (mathematics)
- Abstract
Dynamic methodologies have become fairly established in academia. Their superiority over classical methods like Event Tree/Fault Tree techniques has been demonstrated. Despite this, dynamic methodologies have not enjoyed the support of the industry. One of the primary reasons for the lack of acceptance in the industry is that there is no easy way to qualitatively represent dynamic scenarios. This paper proposes to extend current Event Sequence Diagrams (ESDs) to allow modeling of dynamic situations. Under the proposed ESD representation, ESDs can be used in combination with dynamic methodology computational algorithms which will solve the underlying probabilistic dynamics equations. Once engineers are able to translate their knowledge of the system dynamics and accident evolution into simple ESDs, usage of dynamic methodologies will become more popular.
- Published
- 1999
- Full Text
- View/download PDF
28. Software reliability modeling: an approach to early reliability prediction
- Author
-
R.W. Stoddard, M. Stutzke, and Carol Smidts
- Subjects
Computer science, Empirical process (process control model), Software development, Software quality, Software metric, Reliability engineering, Software development process, Goal-Driven Software Development Process, Software reliability testing, Software verification and validation, Electrical and Electronic Engineering, Safety, Risk, Reliability and Quality
- Abstract
Models for predicting software reliability in the early phases of development are of paramount importance since they provide early identification of cost overruns, software development process issues, optimal development strategies, etc. A few models geared towards early reliability prediction, applicable to well defined domains, have been developed during the 1990s. However, many questions related to early prediction are still open, and more research in this area is needed, particularly for developing a generic approach to early reliability prediction. This paper presents an approach to predicting software reliability based on a systematic identification of software process failure modes and their likelihoods. A direct consequence of the approach and its supporting data collection efforts is the identification of weak areas in the software development process. A Bayes framework for the quantification of software process failure mode probabilities can be useful since it allows use of historical data that are only partially relevant to the software at hand. The key characteristics of the approach should apply to other software-development life-cycles and phases. However, it is unclear how difficult the implementation of the approach would be, and how accurate the predictions would be. Further research will help answer these questions.
- Published
- 1998
- Full Text
- View/download PDF
29. The Cassini Mission probabilistic risk analysis: Comparison of two probabilistic dynamic methodologies
- Author
-
Robert J. Mulvihill, S. Swaminathan, Ali Mosleh, Bruce L. Bream, Carol Smidts, Steve Bell, Kevin Rudolph, and J.-Y. Van-Halle
- Subjects
Event tree, Alternative methods, Probabilistic risk assessment, Computer science, Monte Carlo method, Probabilistic logic, Safety, Risk, Reliability and Quality, Industrial and Manufacturing Engineering, Reliability engineering
- Abstract
This paper describes a comparison between two dynamic methodologies used in the probabilistic risk analysis (PRA) of the Cassini Mission. The main Cassini PRA was performed by Lockheed Martin. A combination of Monte Carlo algorithms and event-tree logic was used to perform the study. Results were validated using an alternative method, the Discrete Dynamic Event Tree (DDET) methodology. Two major conclusions of the paper are that (1) performing a dynamic PRA of large-scale 'real-life' systems is feasible and (2) given the same ground rules and assumptions, the two dynamic methodologies give the same results.
- Published
- 1997
- Full Text
- View/download PDF
30. A methodology for collection and analysis of human error data based on a cognitive model: IDA
- Author
-
Ali Mosleh, Song-Hua Shen, and Carol Smidts
- Subjects
Cognitive model, Nuclear and High Energy Physics, Engineering, Data collection, Probabilistic risk assessment, Mechanical Engineering, Human error, Nuclear Energy and Engineering, Taxonomy (general), Forensic engineering, General Materials Science, Safety, Risk, Reliability and Quality, Root cause analysis, Waste Management and Disposal, Human error assessment and reduction technique
- Abstract
This paper presents a model-based human error taxonomy and data collection. The underlying model, IDA (described in two companion papers), is a cognitive model of behavior developed for analysis of the actions of nuclear power plant operating crews during abnormal situations. The taxonomy is established with reference to three external reference points (i.e., plant status, procedures, and crew) and four reference points internal to the model (i.e., information collected, diagnosis, decision, action). The taxonomy helps the analyst: (1) recognize errors as such; (2) categorize the error in terms of generic characteristics such as 'error in selection of problem solving strategies'; and (3) identify the root causes of the error. The data collection methodology is summarized in post-event operator interview and analysis summary forms. The root cause analysis methodology is illustrated using a subset of an actual event. Statistics that extract generic characteristics of error-prone behaviors and error-prone situations are presented. Finally, applications of the human error data collection are reviewed. A primary benefit of this methodology is to define better symptom-based and other auxiliary procedures, with associated training, to minimize or preclude certain human errors. It also helps in the design of control rooms and in the assessment of human error probabilities in the probabilistic risk assessment framework.
- Published
- 1997
- Full Text
- View/download PDF
31. Probabilistic dynamics: A comparison between continuous event trees and a discrete event tree model
- Author
-
Carol Smidts
- Subjects
Event tree, Mathematical optimization, Component (UML), Probabilistic logic, Special case, Safety, Risk, Reliability and Quality, Algorithm, Industrial and Manufacturing Engineering, Mathematics, Event (probability theory), Variable (mathematics)
- Abstract
The feeling that dynamics and their interaction with the random evolution of parameters were ill-treated in classical probabilistic safety assessment methodologies led to the development of probabilistic dynamics methodologies. These methods explicitly model the mutual influence between physical variables, operators, and components, using different basic assumptions. This paper is a first attempt at a systematic comparison between two such methodologies, namely DYLAM and the continuous event tree (CET) theory, on a simple problem. The problem involves one bistate component and one physical variable whose evolution depends on the component's current state and that should not, in any case, cross a prespecified threshold. The methods are briefly discussed. In particular, we show how DYLAM can be derived as a special case of the CET theory. The numerical implementation of each method is also reviewed. Each method is then applied to the specific problem. The probability of system failure over time is compared to its real, analytically derived value. We focus on key issues such as exactness, stability, and efficiency. We point out the main differences between the methods and draw a first set of conclusions as to their respective fields of application, recognizing, however, that the analysis should be carried further on more complex problems to reach definitive conclusions. (A brute-force Monte Carlo sketch of this benchmark follows this record.)
- Published
- 1994
- Full Text
- View/download PDF
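The benchmark described above (one two-state component and one process variable that must not cross a threshold) is simple enough to estimate by brute-force Monte Carlo, which gives a useful reference point when comparing methods such as DYLAM and the continuous event tree approach. All rates, slopes, and the threshold in the sketch below are hypothetical, and the analytical solution discussed in the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

LAMBDA = {0: 0.02, 1: 0.20}    # transition rates out of state 0 (nominal) and 1 (failed)
DXDT   = {0: -0.1, 1: 1.0}     # evolution of the physical variable in each state
THRESHOLD, T_MISSION, X0 = 5.0, 100.0, 0.0

def one_history():
    """Simulate one history; return True if x crosses the threshold before T_MISSION."""
    t, x, state = 0.0, X0, 0
    while t < T_MISSION:
        dwell = rng.exponential(1.0 / LAMBDA[state])
        dt = min(dwell, T_MISSION - t)
        # Piecewise-linear evolution of x within the current component state.
        if DXDT[state] > 0:
            t_cross = (THRESHOLD - x) / DXDT[state]
            if t_cross <= dt:
                return True
        x = max(0.0, x + DXDT[state] * dt)   # keep the variable non-negative (added assumption)
        t += dt
        state = 1 - state                    # bistate component: toggle on each transition
    return False

n = 20_000
p_fail = sum(one_history() for _ in range(n)) / n
print(f"estimated P(threshold crossed before t={T_MISSION}): {p_fail:.3f}")
```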
32. Hazard and Operability (HAZOP) Analysis of Safety-Related Scientific Software
- Author
-
Jatin Gupta, Michael Allocco, Gerry McCartor, Carol Smidts, and Xiang Li
- Subjects
Hazard (logic), Finite-state machine, Correctness, Operability, General Computer Science, Computer science, Process (engineering), Hazard and operability study, Energy Engineering and Power Technology, Aerospace Engineering, Industrial and Manufacturing Engineering, Reliability engineering, Nuclear Energy and Engineering, Electrical and Electronic Engineering, Safety, Risk, Reliability and Quality, Representation (mathematics), Function (engineering)
- Abstract
The hazard and operability (HAZOP) analysis technique is used to identify and analyze hazards and operational concerns of a system. It provides a structured framework that can be used to perform a step-by-step safety analysis of a system. This paper details how to apply this method to safety-related scientific software. In this paper, we have developed (1) a nomenclature that singles out 30 primary concepts; (2) a canonic set of abstractions of software programming constructs as a function of the primary concepts; (3) a process of translation from an existing design representation to the target design representation in the form of finite state machines; (4) HAZOP templates for each canonical form; and (5) an input variable prioritization method. We also developed a computational tool that can be used to perform HAZOP analysis of scientific software. Its results are compared with those obtained during manual HAZOP analysis by calculating the value of the Shannon entropy, the correctness, and the time required to perform each analysis. Overall, this method helps identify useful information about the impact of variables in the code that can then be utilized to develop robust code for making safety-critical decisions.
- Published
- 2015
- Full Text
- View/download PDF
33. Integrating software into PRA: a software-related failure mode taxonomy
- Author
-
Bin Li, Carol Smidts, Ming Li, and Ken Chen
- Subjects
Engineering, Probabilistic risk assessment, Scheduling (computing), Probability of failure, Software, Risk analysis (engineering), System failure, Systems engineering, Safety, Risk, Reliability and Quality, Risk assessment, Failure mode and effects analysis
- Abstract
Probabilistic risk assessment is a methodology to assess the probability of failure or success of a mission. Results provided by the risk assessment methodology are used to make decisions concerning choice of upgrades, scheduling of maintenance, decision to launch, etc. However, current PRA neglects the contribution of software to the risk of failure of the mission. Our research has developed a methodology to account for the impact of software to system failure. This article focuses on an element of the approach: a comprehensive taxonomy of software-related failure modes. Application of the taxonomy is discussed in this article. A validation of the taxonomy and conclusions drawn from this validation effort are described. Future research is also summarized.
- Published
- 2006
34. Integrating software into PRA: a test-based approach
- Author
-
Carol Smidts, Ming Li, and Bin Li
- Subjects
Engineering, Probabilistic risk assessment, Risk-based testing, Software metric, Reliability engineering, Life-critical system, Software construction, Software quality analyst, Software reliability testing, Software verification and validation, Safety, Risk, Reliability and Quality
- Abstract
Probabilistic risk assessment (PRA) is a methodology to assess the probability of failure or success of a system's operation. PRA has been proved to be a systematic, logical, and comprehensive technique for risk assessment. Software plays an increasing role in modern safety-critical systems, and a significant number of system failures can be attributed to software. Unfortunately, current probabilistic risk assessment concentrates on representing the behavior of hardware systems, humans, and their contributions (to a limited extent) to risk but neglects the contributions of software due to a lack of understanding of software failure phenomena. It is thus imperative to consider and model the impact of software to reflect the risk in current and future systems. The objective of our research is to develop a methodology to account for the impact of software on system failure that can be used in the classical PRA analysis process. A test-based approach for integrating software into PRA is discussed in this article. This approach includes the identification of software functions to be modeled in the PRA and the modeling of the software contributions in the ESDs and fault trees. The approach also introduces the concepts of input tree and output tree and proposes a quantification strategy that uses a software safety testing technique. The method is applied to an example system, PACS.
- Published
- 2005
35. Predicting residual software fault content and their location during multi-phase functional testing using test coverage
- Author
-
Carol Smidts and Ying Shi
- Subjects
Engineering, White-box testing, Software performance testing, Manual testing, Stress testing (software), Reliability engineering, Modified condition/decision coverage, Non-regression testing, Regression testing, Data mining, Software reliability testing, Safety, Risk, Reliability and Quality
- Abstract
Multi-phase functional testing is a common practice used in ultra-reliable software development to ensure that no known faults reside in the software to be delivered. In this paper, we present a new test coverage-based model which allows the description of software systems developed through multiple phases of functional testing. This model is further extended: (a) to take advantage of auxiliary observations collected during the multi-phase testing and consequent analysis process to refine the predictions made; and (b) to describe software systems where either the initial fault distribution is non-uniform with respect to location, or the repair, testing, and detection processes favour certain locations.
- Published
- 2013
- Full Text
- View/download PDF