118 results for "Carol Smidts"
Search Results
2. The research challenges in security and safeguards for nuclear fission batteries
- Author
-
Carol Smidts, Gustavo Reyes, Cassiano Endres de Oliveira, and Lei Raymond Cao
- Subjects
Nuclear Energy and Engineering, Energy Engineering and Power Technology, Safety, Risk, Reliability and Quality, Waste Management and Disposal
- Published
- 2023
- Full Text
- View/download PDF
3. An ontology-based fault generation and fault propagation analysis approach for safety-critical computer systems at the design stage
- Author
-
Xiaoxu Diao, Mike Pietrykowski, Fuqun Huang, Chetan Mutha, and Carol Smidts
- Subjects
Artificial Intelligence, Industrial and Manufacturing Engineering
- Abstract
Fault propagation analysis is a process used to determine the consequences of faults residing in a computer system. A typical computer system consists of diverse components (e.g., electronic and software components); thus, the faults contained in these components tend to possess diverse characteristics. How to describe and model such diverse faults, and how to determine fault propagation through different components, are challenging problems to be addressed in fault propagation analysis. This paper proposes an ontology-based approach, an integrated method that allows for the generation, injection, and propagation through inference of diverse faults at an early stage of the design of a computer system. The results generated by the proposed framework can verify system robustness and identify safety and reliability risks with limited design-level information. The ontological framework is applied to the analysis of an example safety-critical computer system. The analysis result shows that the proposed framework is capable of inferring fault propagation paths through software and hardware components and is effective in predicting the impact of faults.
- Published
- 2022
- Full Text
- View/download PDF
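The fault propagation idea in entry 3 can be made concrete with a small, purely illustrative sketch. The snippet below is not the paper's ontology-based framework; it assumes a hypothetical component graph and pass-through rules, and simply traces which downstream components a given fault class can reach before being masked.

```python
# Minimal sketch (not the paper's ontology/OWL machinery): qualitative fault
# propagation over a hypothetical component dependency graph using breadth-
# first traversal. Component names and fault classes are illustrative only.
from collections import deque

# Directed edges: component -> components it feeds (signals, power, data).
SYSTEM_GRAPH = {
    "sensor": ["adc"],
    "adc": ["controller_sw"],
    "controller_sw": ["actuator_driver"],
    "actuator_driver": ["valve"],
    "valve": [],
}

# Which fault classes a component can pass on to its downstream neighbors.
PROPAGATION_RULES = {
    "sensor": {"drift", "stuck_at"},
    "adc": {"stuck_at", "bit_flip"},
    "controller_sw": {"bit_flip", "wrong_output"},
    "actuator_driver": {"wrong_output"},
    "valve": set(),
}


def propagate(origin: str, fault: str) -> list[str]:
    """Return the components reached by a fault injected at `origin`."""
    reached, frontier = [], deque([origin])
    visited = {origin}
    while frontier:
        comp = frontier.popleft()
        reached.append(comp)
        if fault not in PROPAGATION_RULES[comp] and comp != origin:
            continue  # the fault is masked at this component
        for nxt in SYSTEM_GRAPH[comp]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(nxt)
    return reached


if __name__ == "__main__":
    print(propagate("sensor", "stuck_at"))
    # -> ['sensor', 'adc', 'controller_sw']  (masked inside the controller software)
    print(propagate("sensor", "bit_flip"))
    # -> ['sensor', 'adc', 'controller_sw', 'actuator_driver']  (stops at the driver)
```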
4. Experimental Testbeds and Design of Experiments
- Author
-
Carol Smidts, Indrajit Ray, Quanyan Zhu, Pavan Kumar Vaddi, Yunfei Zhao, Linan Huang, Xiaoxu Diao, Rakibul Talukdar, and Michael C. Pietrykowski
- Published
- 2022
- Full Text
- View/download PDF
5. Machine Learning-Based Abnormal Event Detection and Classification
- Author
-
Carol Smidts, Indrajit Ray, Quanyan Zhu, Pavan Kumar Vaddi, Yunfei Zhao, Linan Huang, Xiaoxu Diao, Rakibul Talukdar, and Michael C. Pietrykowski
- Published
- 2022
- Full Text
- View/download PDF
6. Introduction
- Author
-
Carol Smidts, Indrajit Ray, Quanyan Zhu, Pavan Kumar Vaddi, Yunfei Zhao, Linan Huang, Xiaoxu Diao, Rakibul Talukdar, and Michael C. Pietrykowski
- Published
- 2022
- Full Text
- View/download PDF
7. Cyber-Security Threats and Response Models in Nuclear Power Plants
- Author
-
Carol Smidts, Indrajit Ray, Quanyan Zhu, Pavan Kumar Vaddi, Yunfei Zhao, Linan Huang, Xiaoxu Diao, Rakibul Talukdar, and Michael C. Pietrykowski
- Published
- 2022
- Full Text
- View/download PDF
8. Game-Theoretic Design of Response Systems
- Author
-
Carol Smidts, Indrajit Ray, Quanyan Zhu, Pavan Kumar Vaddi, Yunfei Zhao, Linan Huang, Xiaoxu Diao, Rakibul Talukdar, and Michael C. Pietrykowski
- Published
- 2022
- Full Text
- View/download PDF
9. Probabilistic Risk Assessment: Nuclear Power Plants and Introduction to the Context of Cybersecurity
- Author
-
Carol Smidts, Indrajit Ray, Quanyan Zhu, Pavan Kumar Vaddi, Yunfei Zhao, Linan Huang, Xiaoxu Diao, Rakibul Talukdar, and Michael C. Pietrykowski
- Published
- 2022
- Full Text
- View/download PDF
10. Big Data For Operation and Maintenance Cost Reduction
- Author
-
Carol Smidts, Yunfei Zhao, Xiaoxu Diao, Pavan Kumar Vaddi, Michael Pietrykowski, Wei Gao, Jaydeep Thik, Marat Khafizov, Yeni Li, and Hany Abdel-Khalik
- Published
- 2021
- Full Text
- View/download PDF
11. A requirements inspection method based on scenarios generated by model mutation and the experimental validation
- Author
-
Wei Gao, Boyuan Li, Carol Smidts, and Xiaoxu Diao
- Subjects
Software, Computer science, Systems development life cycle, Extended finite-state machine, Mutation (genetic algorithm), Software requirements specification, Usability, Functional requirement, Software requirements, Reliability engineering
- Abstract
The requirements phase is the most critical phase of the software development life cycle. The quality of the requirements specification affects the overall quality of the subsequent phases and hence, the software product. An effective and efficient method to qualify the software requirements specification (SRS) is necessary to ensure the reliability and safety of software. In this paper, a requirements inspection method based on scenarios generated by model mutation (RIMSM) is proposed to detect defects in the functional requirements of a safety-critical system. The RIMSM method models software requirements using a High Level Extended Finite State Machine (HLEFSM). A method that executes the HLEFSM model is defined. The method uncovers the behaviors and generates the outputs of the system for a given scenario. To identify an adequate set of scenarios in which the model shall be executed, an analogue to mutation testing is defined which applies to the requirements phase. Twenty-one mutation operators are designed based on a taxonomy of defects defined for the requirements phase. Mutants of the HLEFSM model are generated using these operators. Further, an algorithm is developed to identify scenarios that can kill the mutants. The set of scenarios is considered to be adequate for detecting defects in the model when all mutants generated are killed. The HLEFSM model is then executed for the scenarios generated. The results of execution are used to detect defects in the model. A Requirements Inspection Tool based on Scenarios Generated by Model Mutation (RITSM) is developed to automate the application of the RIMSM method. The performance and usability of the RIMSM method are studied and demonstrated in an experiment by comparing the RIMSM method to the checklist-based reading method.
- Published
- 2021
- Full Text
- View/download PDF
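As a rough illustration of the mutation idea in entry 11 (not the RIMSM/HLEFSM tooling itself), the sketch below mutates one transition of a toy state-machine requirements model and checks whether a candidate scenario "kills" the mutant, i.e., produces observably different outputs. All states, events, and outputs are hypothetical.

```python
# Toy requirements model: (state, event) -> (next_state, output).
from copy import deepcopy

MODEL = {
    ("idle", "start"): ("running", "pump_on"),
    ("running", "high_level"): ("tripped", "pump_off"),
    ("running", "stop"): ("idle", "pump_off"),
    ("tripped", "reset"): ("idle", "none"),
}


def execute(model, scenario, initial="idle"):
    """Run an event sequence through the model and collect the visible outputs."""
    state, outputs = initial, []
    for event in scenario:
        state, out = model.get((state, event), (state, "none"))
        outputs.append(out)
    return outputs


def mutate_transition_target(model, key, new_target):
    """Mutation operator: redirect one transition to a different state."""
    mutant = deepcopy(model)
    _, output = mutant[key]
    mutant[key] = (new_target, output)
    return mutant


if __name__ == "__main__":
    # Mutant defect: on "high_level" the system keeps running instead of tripping.
    mutant = mutate_transition_target(MODEL, ("running", "high_level"), "running")

    scenario = ["start", "high_level", "high_level"]
    original_out = execute(MODEL, scenario)  # ['pump_on', 'pump_off', 'none']
    mutant_out = execute(mutant, scenario)   # ['pump_on', 'pump_off', 'pump_off']
    print("scenario kills the mutant:", original_out != mutant_out)  # True
```

A scenario set would be considered adequate, in this simplified sense, once every generated mutant is killed by at least one scenario.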
12. A method for systematically developing the knowledge base of reactor operators in nuclear power plants to support cognitive modeling of operator performance
- Author
-
Carol Smidts and Yunfei Zhao
- Subjects
Cognitive model, Traceability, Process (engineering), Computer science, Nuclear power, Industrial engineering, Industrial and Manufacturing Engineering, Operator (computer programming), Knowledge base, Nuclear power plant, Safety, Risk, Reliability and Quality, Human reliability
- Abstract
Methods based on cognitive modeling have attracted increasing attention in the human reliability analysis community. In such methods, the operator knowledge base plays a central role. This paper proposes a method for systematically developing the knowledge base of nuclear power plant operators. The method starts with a systematic literature review of a predefined topic. Then, the many collected publications are reduced to summaries. Relevant knowledge is then extracted from the summaries using an improved qualitative content analysis method to generate a large number of pieces of knowledge. Lastly, the pieces of knowledge are integrated in a systematic way to generate a knowledge graph consisting of nodes and links. As a case study, the proposed method is applied to develop the knowledge base of reactor operators pertaining to severe accidents in nuclear power plants. The results show that the proposed method exhibits advantages over conventional methods, including reduced reliance on expert knowledge and improved traceability of the process. Generalization of the proposed method to other sources of materials and application of the knowledge base are also discussed. Although this paper is focused on nuclear applications, the proposed method may be extended to other industrial sectors with little additional effort.
- Published
- 2019
- Full Text
- View/download PDF
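The final integration step described in entry 12 (merging extracted pieces of knowledge into a graph of nodes and links) can be sketched as follows. The triples and relation names are invented for illustration; the literature review and qualitative content analysis that would produce them are not shown.

```python
# Minimal sketch of the integration step only: merging coded
# (subject, relation, object) triples from different summaries into one graph.
from collections import defaultdict

coded_triples = [
    ("core_damage", "causes", "hydrogen_generation"),
    ("hydrogen_generation", "threatens", "containment_integrity"),
    ("core_damage", "indicated_by", "high_core_exit_temperature"),
    ("core_damage", "causes", "hydrogen_generation"),  # duplicate from another summary
]


def build_knowledge_graph(triples):
    """Merge triples into an adjacency structure, counting supporting sources."""
    graph = defaultdict(lambda: defaultdict(int))
    for subject, relation, obj in triples:
        graph[subject][(relation, obj)] += 1
    return graph


if __name__ == "__main__":
    kg = build_knowledge_graph(coded_triples)
    for node, edges in kg.items():
        for (relation, target), support in edges.items():
            print(f"{node} --{relation}({support})--> {target}")
```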
13. Automated Identification of Causal Relationships in Nuclear Power Plant Event Reports
- Author
-
Xiaoxu Diao, Yunfei Zhao, Carol Smidts, and Jonathon Huang
- Subjects
Nuclear and High Energy Physics, Event (computing), Computer science, Nuclear power, Condensed Matter Physics, Data science, Identification (information), Nuclear Energy and Engineering, Licensee, Nuclear power plant
- Abstract
A large number of licensee event reports are available in the nuclear power generation sector. A comprehensive analysis of the reports will provide valuable insights for improving nuclear power pla...
- Published
- 2019
- Full Text
- View/download PDF
14. A Model-Based Symbolic Inference for Sensor Deployment Optimization for Fault Detection of the EBR-II Reactor
- Author
-
Xiaoxu Diao, Pavan Vaddi, Boyuan Li, Wei Gao, and Carol Smidts
- Published
- 2021
- Full Text
- View/download PDF
15. A propagation-based fault detection and discrimination method and the optimization of sensor deployment
- Author
-
Carol Smidts, Boyuan Li, Wei Gao, Pavan Kumar Vaddi, and Xiaoxu Diao
- Subjects
Nuclear Energy and Engineering, Computer science, Software deployment, Real-time computing, Experimental Breeder Reactor II, Genetic algorithm, Process (computing), Brute-force search, Fault (power engineering), Fault detection and isolation, System model
- Abstract
Industrial processes can be affected by faults having a serious impact on operation when not promptly detected and diagnosed. In this paper, a propagation-based fault detection and discrimination (PFDD) method is proposed to develop a strategy for fault diagnosis while in the design phase of a system. The PFDD method constructs the system model using the Integrated System Fault Analysis (ISFA) technique. Based on the system model, the propagation of hardware and software faults is simulated qualitatively. Given the results of the simulation, the process by which a fault propagates can be characterized using the qualitative features of system variables, including the deviation of the system variables from their expected values, the variation of the system variables over time, and the order in which each variable is influenced during the propagation of the fault. The strategy by which a fault can be detected and discriminated is defined using those features. The PFDD method supports the detection and discrimination of faults in both steady states and transient states. Based on the PFDD method, the optimization of sensor deployment in a system is discussed. A brute force algorithm is developed to examine the system’s capability at diagnosing faults and the cost of sensor deployment for all possible configurations of sensors. The optimal sensor deployment strategy can be derived accordingly. However, the brute force method is only applicable to small-scale systems due to its high computational cost. A genetic algorithm is used to optimize sensor deployment in large-scale systems. The PFDD and sensor deployment optimization methods are applied to the Experimental Breeder Reactor II (EBR-II) for verification.
- Published
- 2022
- Full Text
- View/download PDF
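A minimal sketch of the brute-force sensor-placement search described in entry 15 is given below. The qualitative fault signatures are hypothetical and the ISFA/PFDD simulation that would generate them is not reproduced; the search simply looks for the smallest sensor subset that detects every fault and distinguishes every fault pair.

```python
# Brute-force sensor selection over hypothetical qualitative fault signatures
# ("+", "-", "0" = deviation up, down, none at each candidate sensor location).
from itertools import combinations

SENSORS = ["flow", "pressure", "temp", "level"]

SIGNATURES = {
    "pump_degraded": {"flow": "-", "pressure": "-", "temp": "0", "level": "0"},
    "valve_stuck":   {"flow": "-", "pressure": "+", "temp": "0", "level": "+"},
    "heater_fail":   {"flow": "0", "pressure": "0", "temp": "-", "level": "0"},
}


def detects_and_discriminates(sensor_set):
    """True if every fault is visible and every fault pair looks different."""
    views = {f: tuple(sig[s] for s in sensor_set) for f, sig in SIGNATURES.items()}
    all_detected = all(any(v != "0" for v in view) for view in views.values())
    patterns = list(views.values())
    all_distinct = len(patterns) == len(set(patterns))
    return all_detected and all_distinct


def cheapest_deployment():
    """Exhaustively search sensor subsets, smallest first."""
    for size in range(1, len(SENSORS) + 1):
        for subset in combinations(SENSORS, size):
            if detects_and_discriminates(subset):
                return subset
    return None


if __name__ == "__main__":
    print(cheapest_deployment())  # -> ('pressure', 'temp') for these signatures
```

For large systems this exhaustive loop is exactly what becomes intractable, which is why the paper turns to a genetic algorithm.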
16. A Deductive Method for Diagnostic Analysis of Digital Instrumentation and Control Systems
- Author
-
Jun Yang, Carol Smidts, and Tunc Aldemir
- Subjects
Markov chain, Computer science, Process (computing), Fault injection, Reliability engineering, Software, Reliability (semiconductor), Safety assurance, Control system, State (computer science), Electrical and Electronic Engineering, Safety, Risk, Reliability and Quality
- Abstract
Reliability and safety assurance are of supreme importance in the implementation of digital safety-critical control systems. A deductive method integrated with simulation-based fault injection and testing is presented for out-of-range permanent software fault localization. For fault modeling, an input–output mapping scheme is proposed to characterize the behavior of software modules and represent failure modes in an analogous manner to hardware state definitions. The Markov/cell-to-cell-mapping scheme is used for diagnostics. The diagnostic process is illustrated by several case studies for a boiling water reactor feedwater control system. The case study results show that the diagnostic algorithm is capable of software fault localization in the presence of both single and multiple faults.
- Published
- 2018
- Full Text
- View/download PDF
17. Bridging the simulator gap: Measuring motivational bias in digital nuclear power plant environments
- Author
-
Carol Smidts and Rachel Benish Shirley
- Subjects
Measure (data warehouse), Computer science, Crew, Control room, Industrial and Manufacturing Engineering, Structural equation modeling, Operator (computer programming), Safety, Risk, Reliability and Quality, Set (psychology), Simulation, Causal model, Human reliability
- Abstract
The use of digital NPP simulator facilities for Human Reliability Analysis (HRA) and nuclear power research is increasingly popular. We propose a method for characterizing and quantifying the gap between data collected in a simulator and data that reflect NPP operations. Using novice operators, we demonstrate how to manipulate and measure the impact of the simulator environment on operator actions. A set of biases is proposed to characterize factors introduced by the simulator environment. There are two categories of simulator bias: Environmental Biases (physical differences between the simulator and the control room), and Motivational Biases (cognitive differences between training in a simulator and operating an NPP). This study examines Motivational Bias. A preliminary causal model of Motivational Biases is introduced and tested in a demonstration experiment using 30 student operators. Data from 41 simulator sessions are analyzed. Data include crew characteristics, operator surveys, and time to recognize and diagnose the accident in the scenario. Quantitative models of the Motivational Biases using Structural Equation Modeling (SEM) are proposed. With these models, we estimate how the effects of the scenario conditions are mediated by simulator bias, and we demonstrate how to quantify the strength of these effects.
- Published
- 2018
- Full Text
- View/download PDF
18. Fault Propagation and Effects Analysis for Designing an Online Monitoring System for the Secondary Loop of the Nuclear Power Plant Portion of a Hybrid Energy System
- Author
-
Mike Pietrykowski, Zhuoer Wang, Xiaoxu Diao, Yunfei Zhao, Carol Smidts, and Shannon Bragg-Sitton
- Subjects
Nuclear and High Energy Physics, Computer science, Secondary loop, Hybrid energy, Monitoring system, Condensed Matter Physics, Fault propagation, Nuclear Energy and Engineering, Control theory, Nuclear power plant
- Abstract
This paper studies the propagation and effects of faults in critical components that pertain to the secondary loop of a nuclear power plant found in nuclear hybrid energy systems (NHESs). This info...
- Published
- 2018
- Full Text
- View/download PDF
19. Reliability analysis of passive systems: An overview, status and research expectations
- Author
-
Samuel Abiodun Olatubosun and Carol Smidts
- Subjects
Passive systems, Nuclear Energy and Engineering, Computer science, Energy Engineering and Power Technology, Safety, Risk, Reliability and Quality, Waste Management and Disposal, Reliability (statistics), Reliability engineering
- Published
- 2022
- Full Text
- View/download PDF
20. Predictive Execution of Parallel Simulations in Hard Real-Time Systems
- Author
-
Michael Pietrykowski and Carol Smidts
- Subjects
Computational Theory and Mathematics, Hardware and Architecture, Software, Theoretical Computer Science
- Published
- 2022
- Full Text
- View/download PDF
21. Sequential Bayesian inference of transition rates in the hidden Markov model for multi-state system degradation
- Author
-
Wei Gao, Yunfei Zhao, and Carol Smidts
- Subjects
Computer science, Bayesian probability, Process (computing), Bayesian inference, Industrial and Manufacturing Engineering, Synthetic data, Data analysis, Forward algorithm, Data mining, Safety, Risk, Reliability and Quality, Hidden Markov model, Importance sampling
- Abstract
Increasingly available system performance data and advances in data analytics have provided opportunities to optimize maintenance programs for engineered systems, for example nuclear power plants. One key task in maintenance optimization is to obtain an accurate model for system degradation. In this research, we propose a Bayesian method to address this problem. Noting that systems usually exhibit multiple states and that the actual state of a system usually is not directly observable, in the method we first model the system degradation process and the observation process based on a hidden Markov model. Then we develop a sequential Bayesian inference algorithm based on importance sampling and the forward algorithm to infer the posterior distributions of the transition rates in the hidden Markov model based on available observations. The proposed Bayesian method allows us to take advantage of evidence from multiple sources, and also allows us to perform Bayesian inference sequentially, without the need to use the entire history of observations every time new observations are collected. We demonstrate the proposed method using both synthetic data for a nuclear power plant feedwater pump and realistic data for a nuclear power plant chemistry analytical device.
- Published
- 2021
- Full Text
- View/download PDF
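The core computation in entry 21, importance sampling combined with the forward algorithm, can be sketched in a few lines for a deliberately simplified case: a two-state, discrete-time degradation model with a single unknown transition probability. The prior, sensor accuracy, and observation history below are hypothetical.

```python
# Simplified sketch of the idea, not the paper's full multi-state, multi-source model.
import numpy as np

rng = np.random.default_rng(0)

# Hidden states: 0 = healthy, 1 = degraded. Observation: noisy reading of the state.
P_OBS_CORRECT = 0.9                      # assumed sensor accuracy
observations = [0, 0, 0, 1, 0, 1, 1, 1]  # toy observation history


def forward_likelihood(p_degrade, obs):
    """P(observations | p_degrade) computed with the forward algorithm."""
    trans = np.array([[1 - p_degrade, p_degrade],
                      [0.0, 1.0]])                         # degradation is one-way
    emit = np.array([[P_OBS_CORRECT, 1 - P_OBS_CORRECT],
                     [1 - P_OBS_CORRECT, P_OBS_CORRECT]])  # emit[state, observed]
    alpha = np.array([1.0, 0.0]) * emit[:, obs[0]]         # start in the healthy state
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
    return alpha.sum()


# Importance sampling with the Beta(1, 1) prior as the proposal distribution.
samples = rng.beta(1.0, 1.0, size=5000)
weights = np.array([forward_likelihood(p, observations) for p in samples])
weights /= weights.sum()

posterior_mean = np.sum(weights * samples)
print(f"posterior mean of the per-step degradation probability: {posterior_mean:.3f}")
```

Sequential updating, as in the paper, would reuse these weighted samples as the prior for the next batch of observations instead of reprocessing the full history.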
22. CMS-BN: A cognitive modeling and simulation environment for human performance assessment, part 2 — Application
- Author
-
Yunfei Zhao and Carol Smidts
- Subjects
Cognitive model, Computer science, Bayesian network, Cognition, Human performance technology, Industrial and Manufacturing Engineering, Reliability engineering, Pilot-operated relief valve, Operator (computer programming), Systems design, Safety, Risk, Reliability and Quality, Human reliability
- Abstract
Human operators play a critical role in the operation of complex engineered systems, in particular under abnormal conditions. It is important to assess human performance under conditions of interest and to improve upon the performance by taking effective measures. In this paper we present the application of a previously developed cognitive modeling and simulation environment to address these two problems. The developed environment simulates how a human operator dynamically interacts with the external system, with focus on the operator’s cognitive activities. Based on the Three Mile Island accident, specifically the stuck-open failure of the pilot operated relief valve, we demonstrate how the developed environment can be used for human reliability analysis and human performance improvement. For human reliability analysis, besides providing a single human error probability value the proposed environment allows us to examine in detail how an operator reached a decision or missed an action. For human performance improvement, the proposed environment provides an easy approach for investigating the effects of changes, for example, in operator training and system design, on human performance and therefore identifying the most effective changes to improve human performance.
- Published
- 2021
- Full Text
- View/download PDF
23. CMS-BN: A cognitive modeling and simulation environment for human performance assessment, part 1 — methodology
- Author
-
Yunfei Zhao and Carol Smidts
- Subjects
Cognitive model, Computer science, Mechanism (biology), Monte Carlo method, Bayesian network, Cognition, Machine learning, Human performance technology, Industrial and Manufacturing Engineering, Perception, Artificial intelligence, Safety, Risk, Reliability and Quality, Human reliability
- Abstract
Cognitive modeling and simulation studies how a human dynamically interacts with the external world. Human performance assessment based on this concept has long been researched in both cognitive sciences and engineering disciplines. However, existing methods have difficulties in describing the uncertain relationships in a human’s knowledge and in considering the uncertainties in the cognitive process. To tackle these issues, we propose a novel cognitive modeling and simulation environment (CMS-BN) by introducing Bayesian networks to represent a human’s knowledge and Monte Carlo simulation to account for the uncertainties in the cognitive process. The proposed environment explicitly models information perception, reasoning and response in a human’s cognitive process. Information perception works as a filtering mechanism to downselect signals from the external world. Reasoning and response are modeled as traversing the human knowledge base represented as a Bayesian network to retrieve knowledge and updating human belief and attention distribution accordingly. Uncertainties in the cognitive process are characterized through Monte Carlo simulation. The proposed environment also models the interplay between the cognitive process and two performance shaping factors, stress and fatigue, though additional factors can be further considered. We expect the proposed environment to be useful in human reliability analysis and human performance improvement.
- Published
- 2021
- Full Text
- View/download PDF
24. Basis for non-propagation domains, their transformations and their impact on software reliability
- Author
-
Chetan Mutha and Carol Smidts
- Subjects
Mechanical system, Robustness (computer science), Computer science, Software design, Fault tolerance, Software system, Safety, Risk, Reliability and Quality, Algorithm, Software quality, Fault detection and isolation, Interval arithmetic
- Abstract
Fault propagation analysis is an important step in determining system reliability and defining fault tolerance strategies. Typically, in the early software design phases, the propagation probability for a fault is assumed to be one. However, the assumption that faults will always propagate highly underestimates reliability, and valuable resources may be wasted on fixing faults that may never propagate. To determine the fault propagation probability, a concept of flat parts is introduced. A flat part is a property of a function; when multiple functions containing flat parts interact with each other, these flat parts undergo a transformation. During this transformation, the flat parts may be killed, preserved, or new flat parts may be generated. Interval arithmetic-based rules to determine such flat part transformations are introduced. A flat part-based propagation analysis can be used to determine the reliability of a software system, or software-driven mechanical system expressed functionally. In addition, the information obtained through flat part-based propagation analysis can be used to add sensors within the flat parts to increase the probability of fault detection, thus increasing the robustness of the system under study.
- Published
- 2017
- Full Text
- View/download PDF
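Entry 24's non-propagation ("flat part") idea can be illustrated with simple interval evaluation: if a fault-induced offset leaves the downstream output interval unchanged (for example, because the stage saturates), the fault does not propagate. The sketch below uses a made-up saturating stage and is not the paper's formal transformation rules.

```python
# Interval-based check of whether a fault offset is visible downstream.
from dataclasses import dataclass


@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def shift(self, delta: float) -> "Interval":
        return Interval(self.lo + delta, self.hi + delta)

    def clamp(self, lo: float, hi: float) -> "Interval":
        return Interval(min(max(self.lo, lo), hi), min(max(self.hi, lo), hi))


def downstream_function(x: Interval) -> Interval:
    """A saturating stage: the output is the input clamped to the range [0, 10]."""
    return x.clamp(0.0, 10.0)


def fault_propagates(nominal: Interval, fault_offset: float) -> bool:
    """A fault propagates only if it changes the downstream output interval."""
    return downstream_function(nominal) != downstream_function(nominal.shift(fault_offset))


if __name__ == "__main__":
    # Input already drives the stage into saturation: the +2.0 offset is absorbed.
    print(fault_propagates(Interval(12.0, 14.0), 2.0))  # False -> a "flat part"
    # Input in the linear region: the same offset remains visible downstream.
    print(fault_propagates(Interval(3.0, 5.0), 2.0))    # True
```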
25. Human Factors Correlation Analysis for Nuclear Micro-Reactors Based on Literature Review and Evidence Theory
- Author
-
Patrick Tosh, Ronald L. Boring, Carol Smidts, and Yunfei Zhao
- Subjects
Variables, Situation awareness, Computer science, Aggregate (data warehouse), Nuclear power, Automation, Variety (cybernetics), Econometrics, Systems design, Human reliability
- Abstract
Nuclear micro-reactors have drawn increasing attention in the nuclear industry since they can be potentially applied for a variety of purposes such as desalination and hydrogen production. Nuclear micro-reactors have unique characteristics (e.g. wider use of automation) compared with currently operating large nuclear power plants. These characteristics inevitably influence operator performance. To perform human reliability analysis and system design optimization, it is necessary to have a thorough investigation of such effects. To this end, in this paper we propose a method based on literature review and evidence theory to analyze human factor correlations for nuclear micro-reactors. A literature review is performed to identify studies related to characteristic human factors for micro-reactors. We divide human factors into independent variables that are used to characterize microreactors (e.g., level of automation) and dependent variables (e.g., situation awareness) that are influenced by the independent variables. The correlations between the independent and dependent variables are extracted from the reviewed studies. We should note that there may be conflicts between the correlations extracted from different studies. To address this problem, we use evidence theory to aggregate the potentially conflicting information to draw final conclusions about the correlations. The results obtained in this study demonstrate the high complexity of the relations between independent and dependent variables, including the uncertainties in the relations, the non-monotonic relations, and the inconsistent effects of one independent variable on different dependent variables. The potential applications of the results in human reliability analysis for nuclear micro-reactors and system design optimization are discussed.
- Published
- 2020
- Full Text
- View/download PDF
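The evidence-theory aggregation step in entry 25 typically relies on Dempster's rule of combination; a minimal sketch is shown below. The two studies' mass assignments are invented, and the full frame of correlation hypotheses used in the paper is not reproduced.

```python
# Dempster's rule of combination for two basic probability assignments over the
# sign of a human-factor correlation; all mass values are hypothetical.
from itertools import product

FRAME = frozenset({"positive", "negative"})

study_a = {frozenset({"positive"}): 0.6, FRAME: 0.4}
study_b = {frozenset({"positive"}): 0.3, frozenset({"negative"}): 0.3, FRAME: 0.4}


def dempster_combine(m1, m2):
    """Intersect focal elements, accumulate conflict, and renormalize."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}, conflict


if __name__ == "__main__":
    fused, conflict = dempster_combine(study_a, study_b)
    for focal, mass in fused.items():
        print(sorted(focal), round(mass, 3))
    print("conflict mass:", round(conflict, 3))
```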
26. Effects Assessment for Requirements Faults of Safety Critical Software in Nuclear Industry
- Author
-
Boyuan Li and Carol Smidts
- Subjects
Hazard (logic), Computer science, Programming complexity, Context (language use), Fault (power engineering), Reactor protection system, Reliability engineering, Software, Probability distribution, Instrumentation (computer programming)
- Abstract
As software has become pervasive in safety-critical applications, trust in software safety is challenged by software complexity and the lack of systematic methods for assessing the effects of remaining faults. To expand the use of digital technology in the nuclear industry, systematic methods are required to assess the effects of remaining faults for software-based Instrumentation & Control (I&C) systems in safety-critical applications. In this paper, the effects of the remaining requirements faults are assessed using a probability density function (PDF) of their hazard rates. A hazard-based effect analysis (HEA) method is developed to obtain the probability distribution of the hazard rates of a remaining requirements fault. The HEA method is applied to a Reactor Protection System (RPS) in the case study. The probability density functions for the introduced, detected, and remaining faults in the requirements phase of the RPS, over the domain of hazard degree, are obtained.
- Published
- 2020
- Full Text
- View/download PDF
27. Support for Reactor Operators in Case of Cyber-Security Threats (NEUP Final Report)
- Author
-
Timothy R. McJunkin, Quanyan Zhu, Carol Smidts, Indrajit Ray, and Timothy Mabry
- Subjects
Computer science, Computer security
- Published
- 2019
- Full Text
- View/download PDF
28. An expert-based method for the risk analysis of functional failures in the fracturing system of unconventional natural gas
- Author
-
Guoan Yang, Feng Chen, Qianlin Wang, Xiaoxu Diao, Carol Smidts, and Yunfei Zhao
- Subjects
Computer science, Fuzzy set, System safety, Defuzzification, Fuzzy logic, Industrial and Manufacturing Engineering, Component (UML), Electrical and Electronic Engineering, Inference engine, Civil and Structural Engineering, Mechanical Engineering, Building and Construction, Pollution, Expert system, Reliability engineering, General Energy, Knowledge base
- Abstract
This paper proposes an expert-based method to analyze functional failure risk in the fracturing system of unconventional natural gas. The newly proposed Fuzzy-FFIP method integrates functional failure identification and propagation (FFIP) with fuzzy logic. Fuzzy-FFIP is sufficient for dealing with the complexity and correlation of functional failures and the lack of failure data, as well as capturing the fuzziness of functional states of a critical component and entire system. From a configuration-behavior-function aspect, FFIP is a common framework used to represent system-wide logical relationships and study functional failure propagation paths. This framework is an expert system where knowledge base and inference engine are based on behavioral rules (BRs) and functional failure logic (FFL), respectively. Using FFL as a support, fuzzy logic is specifically applied to analyze the fuzzification and defuzzification of functional states – operating, degraded, and lost. The proposed method can clearly provide functional failure modes, effectively reveal failure propagation paths, and quantitatively assess functional failure risk levels in the fracturing system. To illustrate its validity, an on-site pump lubricant subsystem of fracturing unit is selected as a test case. Results show that Fuzzy-FFIP is more detailed and accurate, and contributes to system safety during the whole fracturing period.
- Published
- 2021
- Full Text
- View/download PDF
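Only the fuzzification/defuzzification step of entry 28 is sketched below, with invented triangular membership functions over a normalized health indicator and a simple weighted-average defuzzification; the behavioral rules and failure-propagation logic of Fuzzy-FFIP are not shown.

```python
# Fuzzify a component health reading into the states operating/degraded/lost,
# then defuzzify the memberships into a single risk score.

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)


MEMBERSHIP = {
    "lost":      lambda x: tri(x, -0.01, 0.0, 0.4),
    "degraded":  lambda x: tri(x, 0.2, 0.5, 0.8),
    "operating": lambda x: tri(x, 0.6, 1.0, 1.01),
}

# A representative ("crisp") severity value per state, used for defuzzification.
SEVERITY = {"lost": 1.0, "degraded": 0.5, "operating": 0.0}


def fuzzify(health):
    return {state: f(health) for state, f in MEMBERSHIP.items()}


def defuzzify(memberships):
    """Weighted-average (centroid-style) defuzzification to a risk score."""
    total = sum(memberships.values())
    if total == 0.0:
        return 0.0
    return sum(SEVERITY[s] * m for s, m in memberships.items()) / total


if __name__ == "__main__":
    grades = fuzzify(0.65)        # partly degraded, partly operating
    print(grades)
    print("risk score:", round(defuzzify(grades), 3))
```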
29. Metric-based software reliability prediction approach and its application
- Author
-
Carol Smidts, Ying Shi, Steven A. Arndt, and Ming Li
- Subjects
Computer science, Software development, Software quality, Software metric, Reliability engineering, Software sizing, Software construction, Avionics software, Software reliability testing, Software verification and validation, Software
- Abstract
This paper proposes a software reliability prediction approach based on software metrics. Metrics measurement results are connected to quantitative reliability predictions through defect information and consideration of the operational environments. An application of the proposed approach to a safety critical software deployed in a nuclear power plant is discussed. Results show that the proposed prediction approach could be applied using a variety of software metrics at different stages of the software development life cycle and could be used as an indicator of software quality. Therefore the approach could also guide the development process and help make design decisions. Experiences and lessons learned from the application are also discussed.
- Published
- 2016
- Full Text
- View/download PDF
30. Component detection in piping and instrumentation diagrams of nuclear power plants based on neural networks
- Author
-
Carol Smidts, Yunfei Zhao, and Wei Gao
- Subjects
Artificial neural network, Computer science, Deep learning, Energy Engineering and Power Technology, Data structure, Convolutional neural network, Object detection, Nuclear Energy and Engineering, Component (UML), Feature (machine learning), Data mining, Instrumentation (computer programming), Artificial intelligence, Safety, Risk, Reliability and Quality, Waste Management and Disposal
- Abstract
Piping and Instrumentation Diagrams (P&IDs) are the most commonly used engineering drawings to describe components and their relationships, and they are one of the most important inputs for data analysis in Nuclear Power Plants (NPPs). In traditional analysis, the information related to the components is extracted manually from the P&IDs. This usually takes large amounts of effort and is error-prone. With the rapid development in the area of computer vision and deep learning, automatically detecting components and their relationships becomes possible. In this paper, we aim to use the latest neural network models to automatically extract information on components and their identifications from the P&IDs in NPPs. We use a Faster Regional Convolutional Neural Network (Faster RCNN) architecture with a ResNet-50 backbone to detect the components in the P&IDs. Compared to common object detection, object detection for P&IDs poses unique challenges to these methods. For example, the P&ID symbols are much smaller than the background, and detecting such small objects remains a challenging task for modern neural networks. To address these challenges, we 1) propose several techniques for data augmentation that effectively solve the problem of training data shortage, and 2) propose a feature grouping strategy for detecting components with distinct features. In addition, we introduce a SegLink model for text detection, which can automatically extract components’ identifications from P&IDs. We also develop a method for building a data structure to reflect the relationships between components (e.g., to which pipe a component is connected, or what are the downstream or upstream components of one specific component) based on the extracted information. This data structure can be further used for plant safety analysis, and operation and maintenance cost optimization. Sensitivity analysis and comparison with other Convolutional Neural Networks (CNNs) are performed. The results of these analyses are also discussed in this paper. This analysis framework has been tested on the P&IDs from a commercial NPP. The Average Precision for components, which is used to measure the performance of the proposed method, is about 98%. The success rates of component-text mapping and component-pipe mapping are 270/275 and 319/319, respectively. It is worth noting that this framework is generic and can also be applied to P&IDs of non-nuclear industries.
- Published
- 2020
- Full Text
- View/download PDF
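Entry 30's detector can be approximated in torchvision by taking a Faster R-CNN model with a ResNet-50 backbone and replacing its box-classification head with one sized for P&ID symbol classes. The class list, data augmentation, and training pipeline from the paper are not shown; the snippet below only demonstrates the model setup and a single dummy training step.

```python
# Faster R-CNN setup sketch for P&ID symbol detection (hypothetical class list).
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Index 0 is reserved for the background class.
SYMBOL_CLASSES = ["__background__", "valve", "pump", "instrument_bubble", "reducer"]

# Pass weights="DEFAULT" instead to start from COCO-pretrained weights (torchvision >= 0.13).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None)

# Replace the box-classification head so it predicts our symbol classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, len(SYMBOL_CLASSES))

# One illustrative training step on a dummy image with one dummy box.
model.train()
images = [torch.rand(3, 800, 800)]
targets = [{"boxes": torch.tensor([[100.0, 120.0, 180.0, 200.0]]),
            "labels": torch.tensor([1])}]
loss_dict = model(images, targets)          # returns classification and box-regression losses
loss = sum(loss_dict.values())
loss.backward()
print({k: round(v.item(), 3) for k, v in loss_dict.items()})
```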
31. Dynamic bayesian networks based abnormal event classifier for nuclear power plants in case of cyber security threats
- Author
-
Diptendu Mohan Kar, Michael C. Pietrykowski, Indrajit Ray, Yunfei Zhao, Xiaoxu Diao, Carol Smidts, Pavan Kumar Vaddi, and Timothy Mabry
- Subjects
Computer science, Probabilistic logic, Programmable logic controller, Energy Engineering and Power Technology, Conditional probability, Industrial control system, Unobservable, Nuclear Energy and Engineering, Nuclear power plant, Data mining, Safety, Risk, Reliability and Quality, Waste Management and Disposal, Classifier (UML), Dynamic Bayesian network
- Abstract
With increased adoption of digital systems for instrumentation and control, nuclear power plants have become more vulnerable to cyber-attacks. Such attacks can have very serious implications for a plant's operation, especially if they masquerade as safety events. Thus, it is important that research be focused towards distinguishing cyber-attacks from fault-induced safety events for a correct response in a timely manner. In this paper, an event classifier is presented to classify abnormal events in nuclear power plants as either fault-induced safety events or cyber-attacks. The process of inferring the hidden (unobservable) state of a system (normal or faulty) through observable physical sensor measurements has been a long-standing industry practice. There has been a recent surge of literature discussing the use of network traffic data for the detection of cyber-attacks on industrial control systems. In the classifier we present, both physical and network behaviors of a nuclear power plant during abnormal events (safety events or cyber-attacks) are used to infer the probabilities of the states of the plant. The nature of the abnormal event in question is then determined based on these probabilities. The Dynamic Bayesian Networks (DBNs) methodology is used for this purpose since it is an appropriate framework for inferring hidden states of a system from observed variables through probabilistic reasoning. In this paper we introduce a DBN-based abnormal event classifier and an architecture to implement this classifier as part of a monitoring system. An experimental environment is set up with a two-tank system in conjunction with a nuclear power plant simulator and a programmable logic controller. A set of 27 cyber-attacks and 14 safety events were systematically designed for the experiment. A set of 6 cyber-attacks and 2 safety events were used to manually fine-tune the Conditional Probability Tables (CPTs) of the 2-timeslice dynamic Bayesian network (2T-DBN). Out of the remaining 33 events, the nature of the abnormal event was successfully identified in all 33 cases and the location of the cyber-attack or fault was successfully determined in 32 cases. The case study demonstrates the applicability of the methodology developed. Further research should examine the practicality of implementing the proposed monitoring system on a real-world system and issues associated with cost optimization.
- Published
- 2020
- Full Text
- View/download PDF
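A stripped-down version of the filtering idea in entry 31 is sketched below: a discrete hidden state (normal, safety fault, or cyber-attack) is updated each time step from a physical-anomaly flag and a network-anomaly flag. All transition and observation probabilities are invented, not the CPTs tuned in the paper.

```python
# Two-timeslice-style discrete Bayes filter over a hypothetical plant condition.
import numpy as np

STATES = ["normal", "safety_fault", "cyber_attack"]

# P(next state | current state); rows = current state, columns = next state.
TRANSITION = np.array([
    [0.96, 0.02, 0.02],
    [0.00, 1.00, 0.00],
    [0.00, 0.00, 1.00],
])

# P(physical anomaly observed | state) and P(network anomaly observed | state).
P_PHYS = np.array([0.05, 0.90, 0.85])
P_NET = np.array([0.05, 0.05, 0.90])


def step(belief, phys_anomaly, net_anomaly):
    """One predict-update cycle of the discrete Bayes filter."""
    predicted = belief @ TRANSITION
    like_phys = P_PHYS if phys_anomaly else 1.0 - P_PHYS
    like_net = P_NET if net_anomaly else 1.0 - P_NET
    posterior = predicted * like_phys * like_net
    return posterior / posterior.sum()


if __name__ == "__main__":
    belief = np.array([1.0, 0.0, 0.0])                 # start in "normal"
    evidence = [(False, False), (True, False), (True, True), (True, True)]
    for phys, net in evidence:
        belief = step(belief, phys, net)
        print({s: round(b, 3) for s, b in zip(STATES, belief)})
    print("classified as:", STATES[int(np.argmax(belief))])
```

The combination of a physical anomaly with a persistent network anomaly pulls the belief toward the cyber-attack state, which is the essence of using both behaviors rather than either one alone.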
32. Finite-horizon semi-Markov game for time-sensitive attack response and probabilistic risk assessment in nuclear power plants
- Author
-
Linan Huang, Quanyan Zhu, Yunfei Zhao, and Carol Smidts
- Subjects
Mathematical optimization, Probabilistic risk assessment, Computer science, Industrial and Manufacturing Engineering, Dynamic programming, Nash equilibrium, Nuclear power plant, State space, Safety, Risk, Reliability and Quality, Solution concept, Risk assessment, Game theory
- Abstract
Cybersecurity has drawn increasing attention in the nuclear industry. To improve the cyber-security posture, it is important to develop effective methods for cyber-attack response and cyber-security risk assessment. In this research, we develop a finite-horizon semi-Markov general-sum game between the defender (i.e., plant operator) and the attacker to obtain the time-sensitive attack response strategy and the real-time risk assessment in nuclear power plants. We propose methods for identifying system states of concern to reduce the state space and for determining state transition probabilities by integrating probabilistic risk assessment techniques. After a proper discretization of the developed continuous-time model, we use dynamic programming to derive the time-varying and state-dependent strategy of the defender based on the solution concept of the mixed-strategy Nash equilibrium. For risk assessment, three risk metrics are considered, and an exact analytical algorithm and a Monte Carlo simulation-based algorithm for obtaining the metrics are developed. Both players’ strategies and the risk metrics are illustrated using a digital feedwater control system used in pressurized water reactors. The results show that the proposed method can support plant operators in timely cyber-attack response and effective risk assessment, reduce the risk, and improve the resilience of nuclear power plants to malicious cyber-attacks.
- Published
- 2020
- Full Text
- View/download PDF
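Entry 32 solves a general-sum semi-Markov game for mixed-strategy Nash equilibria; the sketch below keeps only the finite-horizon backward-induction skeleton and substitutes a much cruder rule, choosing the defender's pure action with the best worst-case payoff at each stage. States, actions, payoffs, and transitions are hypothetical.

```python
# Finite-horizon backward induction with a pure-strategy maximin rule (a heavy
# simplification of the paper's mixed-strategy Nash solution).
import numpy as np

HORIZON = 3
N_STATES = 2                       # 0 = intrusion suspected, 1 = intrusion confirmed

# PAYOFF[state][defender_action][attacker_action]: defender's stage reward.
PAYOFF = np.array([
    [[-1.0, -4.0], [-2.0, -2.5]],  # state 0; defender actions: monitor, isolate
    [[-6.0, -9.0], [-3.0, -3.5]],  # state 1
])

# NEXT_STATE[state][defender_action][attacker_action]
NEXT_STATE = np.array([
    [[0, 1], [0, 0]],
    [[1, 1], [1, 1]],
])


def backward_induction():
    value = np.zeros(N_STATES)                         # value after the horizon
    policy = []                                        # policy[stage][state] -> action
    for _ in range(HORIZON):
        new_value = np.zeros(N_STATES)
        stage_policy = []
        for s in range(N_STATES):
            totals = PAYOFF[s] + value[NEXT_STATE[s]]  # stage reward + continuation value
            worst_case = totals.min(axis=1)            # attacker picks the worst column
            best = int(np.argmax(worst_case))          # defender's maximin pure action
            stage_policy.append(best)
            new_value[s] = worst_case[best]
        value = new_value
        policy.insert(0, stage_policy)
    return value, policy


if __name__ == "__main__":
    value, policy = backward_induction()
    print("defender maximin value per state:", value)
    print("defender action per (stage, state):", policy)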
33. A control-theoretic approach to detecting and distinguishing replay attacks from other anomalies in nuclear power plants
- Author
-
Carol Smidts and Yunfei Zhao
- Subjects
Exploit, Computer science, Energy Engineering and Power Technology, Watermark, Nuclear power, Computer security, Nuclear Energy and Engineering, Random noise, Safety, Risk, Reliability and Quality, Null hypothesis, Waste Management and Disposal, Replay attack
- Abstract
The wider use of digital systems in nuclear power plants has raised security concerns for the utilities, the regulatory entities, and the public. These digital systems provide malicious attackers with opportunities to exploit vulnerabilities and cause physical damage. Replay attacks are one such type of attack. They are easy to perform and, when combined with other types of attacks, can potentially lead to significant consequences. Therefore, timely detection of a replay attack is of vital importance for the operator to take mitigation measures. It is also important to distinguish between a replay attack and other anomalies, since they are of a different nature (one intentional and the other random) and will require different responses by the operator. In this research, we propose a control-theoretic method to address this problem. The method consists of the injection of random noise (i.e., a physical watermark) into the control input and two chi-squared tests. The physical watermark is used to excite the system being controlled so that the replay attack, if it exists, can be uncovered. The first chi-squared test is used to detect anomalies in the system, including replay attacks and other anomalies. If the null hypothesis in the first test is rejected, we use the second chi-squared test to determine whether the anomaly is a replay attack or any other anomaly. We demonstrate the proposed method by using a steam generator that can be found in pressurized water reactors. The results of different cases are presented and discussed.
- Published
- 2020
- Full Text
- View/download PDF
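The watermark-plus-chi-squared mechanism in entry 33 can be illustrated on a scalar toy system: the detector predicts the output using the watermark it injected, so replayed measurements recorded under an old watermark inflate the residuals. Noise levels, gains, and the threshold below are invented, and the paper's second test for discriminating replay attacks from other anomalies is not reproduced.

```python
# Toy scalar plant with a watermarked input and a chi-squared residual test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a, b = 0.9, 0.5                     # stable scalar plant: x' = a*x + b*u + w
q, r = 0.05, 0.05                   # process and measurement noise variances
sigma_w2 = 1.0                      # variance of the injected watermark
WINDOW = 50


def run_plant(u_seq, x0=0.0):
    """Simulate the plant under the given inputs and return its measurements."""
    x, ys = x0, []
    for u in u_seq:
        x = a * x + b * u + rng.normal(0.0, np.sqrt(q))
        ys.append(x + rng.normal(0.0, np.sqrt(r)))
    return np.array(ys)


def chi2_statistic(y_seq, u_seq):
    """Chi-squared statistic of residuals against a prediction driven by the
    inputs (including the watermark) the detector knows it injected."""
    x_hat, residuals = 0.0, []
    for y, u in zip(y_seq, u_seq):
        prediction = a * x_hat + b * u
        residuals.append(y - prediction)
        x_hat = y                         # crude one-step measurement update
    resid_var = q + r + a * a * r         # residual variance when measurements are honest
    return float(np.sum(np.square(residuals)) / resid_var)


# Phase 1: the attacker records outputs generated under an *old* watermark sequence.
old_watermark = rng.normal(0.0, np.sqrt(sigma_w2), WINDOW)
recorded_y = run_plant(old_watermark)

# Phase 2: detection window with a freshly drawn watermark.
new_watermark = rng.normal(0.0, np.sqrt(sigma_w2), WINDOW)
honest_y = run_plant(new_watermark)

threshold = stats.chi2.ppf(0.99, df=WINDOW)
print("threshold        :", round(threshold, 1))
print("honest statistic :", round(chi2_statistic(honest_y, new_watermark), 1))
print("replay statistic :", round(chi2_statistic(recorded_y, new_watermark), 1))
# Replayed measurements do not respond to the new watermark, so the replay
# statistic typically lands far above the threshold while the honest one does not.
```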
34. Next-Generation Architecture and Autonomous Cyber-Defense
- Author
-
Pavan Kumar Vaddi, Carol Smidts, and Xiaoxu Diao
- Subjects
Network architecture, Cyber defense, Computer science, Systems engineering, Industrial control system, Architecture
- Abstract
This chapter introduces the motivation for and emerging developments in next-generation network architectures to enable autonomous cyber-defense (ACD), including promising studies on cyber-defense approaches and mechanisms applied to contemporary industrial control systems (ICSs).
- Published
- 2019
- Full Text
- View/download PDF
35. Development of a quantitative Bayesian network mapping objective factors to subjective performance shaping factor evaluations: An example using student operators in a digital nuclear power plant simulator
- Author
-
Rachel Benish Shirley, Carol Smidts, and Yunfei Zhao
- Subjects
Computer science, Bayesian probability, Bayesian network, Context (language use), Variation (game tree), Bayesian inference, Industrial and Manufacturing Engineering, Nuclear power plant, Safety, Risk, Reliability and Quality, Baseline (configuration management), Simulation, Human reliability
- Abstract
Traditional human reliability analysis methods consist of two main steps: assigning values for performance shaping factors (PSFs), and assessing human error probability (HEP) based on PSF values. Both steps rely on expert judgment. Considerable advances have been made in reducing reliance on expert judgment for HEP assessment by incorporating human performance data from various sources (e.g., simulator experiments); however, little has been done to reduce reliance on expert judgment for PSF assignment. This paper introduces a data-driven approach for assessing PSFs in Nuclear Power Plants (NPPs) based on contextual information. The research illustrates how to develop a Bayesian PSF network using data collected from student operators in a NPP simulator. The approach starts with a baseline PSF model that calculates PSF values from context information during an accident scenario. Then, a Bayesian model is developed to link the baseline model to the Subjective PSFs. Two additional factors are included: simulator bias and context information. Results and analysis include variation between the results of the proposed model and the training dataset, and the significance of each element in the model. The proposed approach reduces the reliance of PSF assignment on expert judgment and is particularly suitable for dynamic human reliability analysis.
- Published
- 2020
- Full Text
- View/download PDF
36. Human Reliability as a Science—A Divergence on Models
- Author
-
Carol Smidts
- Subjects
Set (abstract data type), Data collection, Computer science, State (computer science), Divergence (statistics), Data science, Human reliability
- Abstract
Human reliability analysis is a discipline that focuses on understanding and assessing human behavior during its interactions with complex engineered systems. Central to the discipline are human reliability models and data collection efforts. This paper briefly reviews the state of the art in human reliability analysis and evaluates it against a set of criteria that can be established when it is viewed as a science.
- Published
- 2018
- Full Text
- View/download PDF
37. Automated Software Testing
- Author
-
Boyuan Li, Xiaoxu Diao, Manuel Rodriguez, and Carol Smidts
- Subjects
Optimization algorithm, Modeling language, Computer science, Systems modeling, Software testing, Test execution, Software engineering
- Published
- 2018
- Full Text
- View/download PDF
38. A computational cognitive modeling approach to human performance assessment in nuclear power plants
- Author
-
Carol Smidts and Yunfei Zhao
- Subjects
Cognitive model, Computer science, Systems engineering, Nuclear power
- Published
- 2018
- Full Text
- View/download PDF
39. An automated software reliability prediction system for safety critical software
- Author
-
Xiang Li, Carol Smidts, and Chetan Mutha
- Subjects
Computer science, Usability, Software quality, Reliability engineering, Software, Software sizing, Software construction, Software reliability testing, Software verification and validation, Reliability (statistics)
- Abstract
Software reliability is one of the most important software quality indicators. It is concerned with the probability that the software can execute without any unintended behavior in a given environment. In previous research we developed the Reliability Prediction System (RePS) methodology to predict the reliability of safety-critical software such as that used in the nuclear industry. A RePS methodology relates the software engineering measures to software reliability using various models, and it was found that RePSs using Extended Finite State Machine (EFSM) models and fault data collected through various software engineering measures possess the most satisfying prediction capability. In this research the EFSM-based RePS methodology is improved and implemented into a tool called Automated Reliability Prediction System (ARPS). The features of the ARPS tool are introduced with a simple case study. An experiment using human subjects was also conducted to evaluate the usability of the tool, and the results demonstrate that the ARPS tool can indeed help the analyst apply the EFSM-based RePS methodology with fewer errors and lower error criticality.
- Published
- 2015
- Full Text
- View/download PDF
40. Three suggestions on the definition of terms for the safety and reliability analysis of digital systems
- Author
-
Carol Smidts and Man Cheol Kim
- Subjects
Vocabulary, Task group, Probabilistic risk assessment, Computer science, Control (management), Data science, Industrial and Manufacturing Engineering, Terminology, Reliability engineering, Safety, Risk, Reliability and Quality, Set (psychology), Reliability (statistics), Dependency (project management)
- Abstract
As digital instrumentation and control systems are being progressively introduced into nuclear power plants, a growing number of related technical issues that need to be resolved are coming to light. As a result, an understanding of relevant terms and basic concepts becomes increasingly important. Under the framework of the OECD/NEA WGRISK DIGREL Task Group, the authors were involved in reviewing definitions of terms forming the supporting vocabulary for addressing issues related to the safety and reliability analysis of digital instrumentation and control (SRA of DI&C). These definitions were extracted from various standards regulating the disciplines that form the technical and scientific basis of SRA DI&C. The authors discovered that different definitions are provided by different standards within a common discipline and used differently across various disciplines. This paper raises the concern that a common understanding of terms and basic concepts has not yet been established to address the very specific technical issues facing SRA DI&C. Based on the lessons learned from the review of the definitions of interest and the analysis of dependency relationships existing between these definitions, this paper establishes a set of recommendations for the development of a consistent terminology for SRA DI&C.
- Published
- 2015
- Full Text
- View/download PDF
41. Validating THERP: Assessing the scope of a full-scale validation of the Technique for Human Error Rate Prediction
- Author
-
Meng Li, Carol Smidts, Atul Gupta, and Rachel Benish Shirley
- Subjects
Mean squared error, Nuclear Energy and Engineering, Sample size determination, Computer science, Consistency (statistics), Statistics, Human error, Word error rate, Absolute probability judgement, Human error assessment and reduction technique, Human reliability
- Abstract
Science-based Human Reliability Analysis (HRA) seeks to experimentally validate HRA methods in simulator studies. Emphasis is on validating the internal components of the HRA method, rather than the validity and consistency of the final results of the method. In this paper, we assess the requirements for a simulator study validation of the Technique for Human Error Rate Prediction (THERP), a foundational HRA method. The aspects requiring validation include the tables of Human Error Probabilities (HEPs), the treatment of stress, and the treatment of dependence between tasks. We estimate the sample size, n , required to obtain statistically significant error rates for validating HEP values, and the number of observations, m , that constitute one observed error rate for each HEP value. We develop two methods for estimating the mean error rate using few observations. The first method uses the median error rate, and the second method is a Bayesian estimator of the error rate based on the observed errors and the number of observations. Both methods are tested using computer-generated data. We also conduct a pilot experiment in The Ohio State University’s Nuclear Power Plant Simulator Facility. Student operators perform a maintenance task in a BWR simulator. Errors are recorded, and error rates are compared to the THERP-predicted error rates. While the observed error rates are generally consistent with the THERP HEPs, further study is needed to provide confidence in these results as the pilot study sample size is small. Sample size calculations indicate that a full-scope THERP validation study would be a substantial but potentially feasible undertaking; 40 h of observation would provide sufficient data for a preliminary study, and observing 101 operators for 20 h each would provide data for a full validation experiment.
- Published
- 2015
- Full Text
- View/download PDF
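Two of the quantitative ingredients in entry 41, the sample size needed to observe a rare error rate and a Bayesian estimate of the error rate from few observations, can be sketched as follows. The prior parameters and the target of five observed errors are illustrative choices, not values taken from the study.

```python
# Back-of-the-envelope sample-size and Beta-Binomial estimation sketch.
import math


def trials_needed(hep, n_errors_wanted=5, confidence=0.95):
    """Smallest n such that P(at least `n_errors_wanted` errors) >= confidence,
    assuming independent task executions with error probability `hep`."""
    n = n_errors_wanted
    while True:
        p_at_least = 1.0 - sum(
            math.comb(n, k) * hep**k * (1.0 - hep) ** (n - k)
            for k in range(n_errors_wanted)
        )
        if p_at_least >= confidence:
            return n
        n += 1


def bayesian_hep(errors, observations, prior_alpha=0.5, prior_beta=50.0):
    """Posterior mean of the error rate with a Beta prior centered near 1e-2."""
    return (prior_alpha + errors) / (prior_alpha + prior_beta + observations)


if __name__ == "__main__":
    # Many THERP table entries are on the order of 1e-2 to 1e-3.
    print("observations needed for HEP = 0.01:", trials_needed(0.01))
    print("posterior HEP after 2 errors in 120 observations:",
          round(bayesian_hep(2, 120), 4))
```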
42. A Method for Merging Experts' Cause-Effect Knowledge in Software Dependability
- Author
-
Fuqun Huang, Carol Smidts, Boyuan Li, and Ted Quinn
- Subjects
Cause effect, Computer science, Knowledge engineering, Maintainability, Software quality, Software development process, Software, Dependability, Software engineering, Merge (version control)
- Abstract
Software dependability engineering is an interdisciplinary area built on numerous concepts with complex interrelations. Understanding the relations between various concepts and integrating such conceptual knowledge is fundamental for educational, research and industrial purposes. This paper proposes a method to merge experts' conceptual knowledge represented in the form of Causal Mechanism Graphs (CMG). A case study was conducted to apply the method to 14 causal mechanism graphs obtained from 11 domain experts. The obtained consensus knowledge was then validated and evolved based on another 24 experts' opinions. The results demonstrate the main causal mechanisms that influence software dependability attributes, i.e., the software aspects of reliability, safety, security, availability, and maintainability. The application shows that the CMG merging method has the advantage of explicitly aggregating complex causal knowledge without losing information on the original knowledge structure. The application also shows that the CMG merging method is capable of integrating various factors that influence multiple dependability attributes at different stages of the software development lifecycle.
- Published
- 2018
- Full Text
- View/download PDF
43. A Quantification Framework for Software Safety in the Requirements Phase: Application to Nuclear Power Plants
- Author
-
Boyuan Li, Ted Quinn, Fuqun Huang, and Carol Smidts
- Subjects
Computer science, Digital instrumentation, System safety, Nuclear power, Reactor protection system, Reliability engineering, Software, Nuclear industry, Function (engineering)
- Abstract
With the increasing dependence on digital instrumentation and control (I&C) systems in nuclear power plants, software has become a significant determinant of system safety assurance. To expand the use of digital technology in the nuclear industry, systematic methods are required for quantifying the safety of software-based I&C systems in safety critical applications. A software safety quantification model limited to the requirements phase is built in this paper based on the causal mechanisms that challenge safety. A preliminary mathematical method was developed to assess the number of requirements faults and their sub-types. A case study is conducted on a function of a reactor protection system to verify the validity of the quantification model.
- Published
- 2018
- Full Text
- View/download PDF
44. A Dynamic Mechanistic Model of Human Response Proposed for Human Reliability Analysis
- Author
-
Yunfei Zhao and Carol Smidts
- Subjects
Operator (computer programming), Knowledge base, Process (engineering), Computer science, Perception, Artificial intelligence, Semantic network, Human reliability - Abstract
A dynamic mechanistic model of human response for human reliability analysis is proposed in this paper. The model comprises three main components: information perception, knowledge base, and human reasoning and decision making. An activation-based approach, which considers both stimulus-driven and goal-directed activations, is adopted to model the impact of information perception on the reasoning process. An operator’s knowledge base is represented in the model by a semantic network. Activation propagation theory is applied to model human reasoning and decision making. An illustration of activation propagation through the relief-valve-stuck-open incident in the Three Mile Island accident demonstrates the feasibility of this approach. Additionally, the influences of two significant performance shaping factors, stress and fatigue, are integrated into the model.
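A minimal sketch of spreading activation over a small semantic network, with stimulus-driven and goal-directed activation supplied as initial node values; the decay factor, update rule, and knowledge-base fragment are illustrative assumptions, not the paper's formulation.

```python
def propagate_activation(links, activation, decay=0.6, steps=3):
    """Spread activation along weighted links of a semantic network.
    links: dict mapping node -> list of (neighbor, weight) pairs.
    activation: dict mapping node -> initial (stimulus-driven + goal-directed) value."""
    act = dict(activation)
    for _ in range(steps):
        incoming = {}
        for node, neighbors in links.items():
            for nbr, weight in neighbors:
                incoming[nbr] = incoming.get(nbr, 0.0) + decay * weight * act.get(node, 0.0)
        for nbr, value in incoming.items():
            act[nbr] = act.get(nbr, 0.0) + value
    return act

# Hypothetical fragment of an operator's knowledge base
kb_links = {
    "relief valve stuck open": [("coolant inventory loss", 0.9)],
    "coolant inventory loss": [("core uncovery risk", 0.8)],
    "core uncovery risk": [],
}
print(propagate_activation(kb_links, {"relief valve stuck open": 1.0}))
```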
- Published
- 2017
- Full Text
- View/download PDF
45. A Systematic Method to Build a Knowledge Base to be Used in a Human Reliability Analysis Model
- Author
-
Carol Smidts, Yunfei Zhao, Keith Altman, Muhammad Nevin Anandika, and Kreteeka Chaudhury
- Subjects
Knowledge representation and reasoning, Computer science, Automatic summarization, Systematic review, Knowledge base, Nuclear power plant, Data mining, Human reliability, Coding (social sciences) - Abstract
A human’s knowledge base is a key component for the development of a mechanistic model of human response to be used for human reliability analysis. This paper proposes a new method for constructing this knowledge base. The proposed method is comprised of three steps: (1) systematic literature review, which is used to collect data pertinent to the subject under study; (2) summarization, the goal of which is to extract key points that are expressed in the literature; (3) qualitative coding, a process in which codes closely related to the topic are derived and the relationships between these codes are expressed. As a case study, the proposed method is being applied to construct an operator’s knowledge base concerning severe accident phenomenology in a nuclear power plant. Part of this application is explored in this paper. With the proposed method and the resulting knowledge base, it is expected that an individual’s response when presented with a specific context can be modeled in more detail.
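A minimal sketch of a data structure that could hold the output of the qualitative coding step, i.e., codes and the relationships between them; the class name, relation labels, and example statements are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeBase:
    """Codes (concepts) and directed relations obtained from qualitative coding."""
    codes: set = field(default_factory=set)
    relations: set = field(default_factory=set)  # (source_code, relation, target_code)

    def add_coded_statement(self, source, relation, target):
        """Record one coded key point extracted from the summarized literature."""
        self.codes.update({source, target})
        self.relations.add((source, relation, target))

kb = KnowledgeBase()
# Hypothetical key points from severe-accident literature
kb.add_coded_statement("zirconium oxidation", "produces", "hydrogen")
kb.add_coded_statement("core damage", "releases", "fission products")
print(kb.codes)
print(kb.relations)
```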
- Published
- 2017
- Full Text
- View/download PDF
46. Foreword: Special section on Big Data for Nuclear Power Plants
- Author
-
Marat Khafizov and Carol Smidts
- Subjects
Nuclear and High Energy Physics, Engineering, Nuclear Energy and Engineering, Big data, Special section, Nuclear power, Condensed Matter Physics, Telecommunications - Published
- 2019
- Full Text
- View/download PDF
47. Software testing with an operational profile
- Author
-
Chetan Mutha, Matthew J. Gerber, Carol Smidts, and Manuel Rodriguez
- Subjects
General Computer Science, Standardization, Computer science, Field (computer science), Software quality, Theoretical Computer Science, Test case, Software, Open research, Taxonomy (general), Data mining, Dimension (data warehouse) - Abstract
This article is devoted to the survey, analysis, and classification of operational profiles (OPs), which characterize the type and frequency of software inputs and are used in software testing techniques. The survey follows a mixed method based on systematic maps and qualitative analysis. The article is organized around one main dimension, OP classes, which characterize the OP model and form the basis for generating test cases. The classes are organized as a taxonomy composed of common OP features (e.g., profiles, structure, and scenarios), software boundaries (which define the scope of the OP), OP dependencies (such as those on the code or on the field of interest), and OP development (which specifies when and how an OP is developed). To facilitate understanding of the relationships between OP classes and their elements, a meta-model was developed that can be used to support OP standardization. Many open research questions related to OP definition and development are identified based on the survey and classification.
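A minimal sketch of operational-profile-driven test input selection, i.e., sampling input classes in proportion to their operational frequency; the profile values and class names are hypothetical and not taken from the article.

```python
import random

def sample_test_inputs(operational_profile, n, seed=None):
    """Draw test inputs according to an operational profile: a mapping from
    input class to its probability of occurrence in operation."""
    rng = random.Random(seed)
    classes = list(operational_profile)
    weights = [operational_profile[c] for c in classes]
    return rng.choices(classes, weights=weights, k=n)

# Hypothetical profile for a control command handler
profile = {"normal command": 0.90, "boundary value": 0.07, "malformed input": 0.03}
print(sample_test_inputs(profile, n=10, seed=42))
```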
- Published
- 2014
- Full Text
- View/download PDF
48. An integrated multidomain functional failure and propagation analysis approach for safe system design
- Author
-
Irem Y. Tumer, Carol Smidts, Chetan Mutha, and David C. Jensen
- Subjects
Decision support system, Integrated design, Engineering, Iterative design, Process (computing), Physical system, Fault (power engineering), Industrial and Manufacturing Engineering, Reliability engineering, Software, Artificial Intelligence, Systems design - Abstract
Early system design analysis and fault removal are important steps in the iterative design process that help avoid costly repairs in the later stages of system development. System complexity is increasing with the increased use of software to control the physical system. There is a dearth of techniques to evaluate inconsistencies, incompatibilities, and fault proneness of a system design in an integrated manner. The early design analysis technique presented in this paper helps a designer understand the interplay between the multifaceted components and evaluate the design in an integrated manner. The technique allows simultaneous propagation of different types of faults from various domains and evaluates their functional impact over time. The structure of the technique is explained using domain-specific conceptual metamodels, whereas its execution is based on the event sequence diagram, one of the established reliability and safety analysis techniques. One of the notable features of the proposed technique is the object-oriented nature of the system design representation. The technique is demonstrated with the help of a case study, and the execution results of two scenarios are evaluated to demonstrate the analysis capability of the proposed technique.
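A minimal sketch of fault propagation across functional dependencies to a fixed point, in the spirit of the functional failure propagation described above; the dependency structure and function names are hypothetical, and the paper's event-sequence-diagram execution is not reproduced here.

```python
def propagate_faults(dependencies, initial_faults):
    """Propagate faults through functional dependencies.
    dependencies: dict mapping a function to the functions it depends on.
    A function is degraded if it is faulted or depends on a degraded function."""
    degraded = set(initial_faults)
    changed = True
    while changed:
        changed = False
        for func, deps in dependencies.items():
            if func not in degraded and any(d in degraded for d in deps):
                degraded.add(func)
                changed = True
    return degraded

# Hypothetical multidomain design: a software fault impacting a physical function
deps = {
    "sense pressure": [],
    "compute setpoint": ["sense pressure"],
    "actuate valve": ["compute setpoint"],
    "maintain flow": ["actuate valve"],
}
print(propagate_faults(deps, {"compute setpoint"}))
# {'compute setpoint', 'actuate valve', 'maintain flow'}
```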
- Published
- 2013
- Full Text
- View/download PDF
49. Human reliability modeling for the Next Generation System Code
- Author
-
Carol Smidts and R. Sundaramurthi
- Subjects
Structure (mathematical logic), Operator (computer programming), Nuclear Energy and Engineering, Probabilistic risk assessment, Operations research, Process (engineering), Computer science, Human error, Bayesian network, Conditional probability, Reliability engineering, Human reliability - Abstract
This paper derives the human reliability model requirements for the Next Generation System Code, which will be used to determine risk-informed safety margins for nuclear power plants through dynamic probabilistic risk analysis. The proposed model is flexible, with the facility to apply a coarse-grain or a fine-grain structure based on the desired resolution level. The varying resolution is achieved by employing, for the coarse-grain structure, human reliability analysis methods with a demonstrated capability of handling human errors that occur during the execution of procedural activities, and, for the fine-grain structure, the advanced cognitive IDA/IDAC method. The paper proposes improvements to the existing IDA/IDAC model to incorporate functionalities demanded by the NGSC. The improvements are derived for four modules of IDA/IDAC. A Bayesian belief network is constructed for the performance-shaping factors, and the conditional probability of each factor's presence is computed from data collected from aviation and nuclear accidents. The influence of the performance-shaping factors on the operator's strategy-selection process is also depicted. A foundation is laid for the development of mental models with a focus on NPP operation. The research lists the modifications/additions required for the IDA/IDAC method to enable the incorporation of Human Reliability Analysis (HRA) into the Next Generation System Code.
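A minimal sketch of marginalizing an error probability over two performance-shaping factors in a tiny Bayesian-network-style model; the factors, probabilities, and independence assumption are illustrative and are not the conditional probabilities elicited in the paper.

```python
from itertools import product

# Hypothetical two-factor performance-shaping-factor model (not the paper's network)
p_stress = {True: 0.3, False: 0.7}
p_fatigue = {True: 0.2, False: 0.8}
# P(operator error | stress, fatigue), illustrative numbers only
p_error_given = {(True, True): 0.20, (True, False): 0.08,
                 (False, True): 0.05, (False, False): 0.01}

def marginal_error_probability():
    """Marginalize P(error) over the PSF states, assuming independent PSFs."""
    return sum(p_stress[s] * p_fatigue[f] * p_error_given[(s, f)]
               for s, f in product([True, False], repeat=2))

print(marginal_error_probability())  # ~0.044
```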
- Published
- 2013
- Full Text
- View/download PDF
50. Final Technical Report on Quantifying Dependability Attributes of Software Based Safety Critical Instrumentation and Control Systems in Nuclear Power Plants
- Author
-
Fuqun Huang, Carol Smidts, Boyuan Li, and Xiang Li
- Subjects
Engineering, Event (computing), Software development, Maintainability, Dependability, Expert elicitation, Software system, Software engineering, Software quality, Reliability engineering, Causal model - Abstract
With the current transition from analog to digital instrumentation and control systems in nuclear power plants, the number and variety of software-based systems have significantly increased. The sophisticated nature and increasing complexity of software raises trust in these systems as a significant challenge. The trust placed in a software system is typically termed software dependability. Software dependability analysis faces uncommon challenges since software systems' characteristics differ from those of hardware systems. The lack of systematic science-based methods for quantifying the dependability attributes of software-based instrumentation and control systems in safety-critical applications has proved to be a significant inhibitor to the expanded use of modern digital technology in the nuclear industry. Dependability refers to the ability of a system to deliver a service that can be trusted. Dependability is commonly considered a general concept that encompasses different attributes, e.g., reliability, safety, security, availability and maintainability. Dependability research has progressed significantly over the last few decades. For example, various assessment models and/or design approaches have been proposed for software reliability, software availability and software maintainability. Advances have also been made to integrate multiple dependability attributes, e.g., integrating security with other dependability attributes, measuring availability and maintainability, modeling reliability and availability, quantifying reliability and security, exploring the dependencies between security and safety, and developing integrated analysis models. However, there is still a lack of understanding of the dependencies between the various dependability attributes as a whole and of how such dependencies are formed. To address the need for quantification and give a more objective basis to the review process, thereby reducing regulatory uncertainty, measures and methods are needed to assess dependability attributes early on, as well as throughout the life-cycle process of software development. In this research, extensive expert opinion elicitation is used to identify the measures and methods for assessing software dependability. Semi-structured questionnaires were designed to elicit expert knowledge. A new notation system, Causal Mechanism Graphing, was developed to extract and represent such knowledge. The Causal Mechanism Graphs were merged, thus obtaining the consensus knowledge shared by the domain experts. In this report, we focus on how software contributes to dependability. However, software dependability is not discussed separately from the context of systems or socio-technical systems. Specifically, this report focuses on software dependability, reliability, safety, security, availability, and maintainability. Our research was conducted in the sequence of stages found below. Each stage is further examined in its corresponding chapter.
Stage 1 (Chapter 2): Elicitation of causal maps describing the dependencies between dependability attributes. These causal maps were constructed using expert opinion elicitation. This chapter describes the expert opinion elicitation process, the questionnaire design, the causal map construction method, and the causal maps obtained.
Stage 2 (Chapter 3): Elicitation of the causal map describing the occurrence of the event of interest for each dependability attribute. The causal mechanisms for the “event of interest” were extracted for each of the software dependability attributes. The “event of interest” for a dependability attribute is generally considered to be the “attribute failure”, e.g., security failure. The extraction was based on the analysis of the expert elicitation results obtained in Stage 1.
Stage 3 (Chapter 4): Identification of relevant measurements. Measures for the “events of interest” and their causal mechanisms were obtained from expert opinion elicitation for each of the software dependability attributes. The measures extracted are presented in this chapter.
Stage 4 (Chapter 5): Assessment of the coverage of the causal maps via measures. Coverage was assessed to determine whether the measures obtained were sufficient to quantify software dependability, and what measures are further required.
Stage 5 (Chapter 6): Identification of “missing” measures and measurement approaches for concepts not covered. New measures, for concepts that had not been covered sufficiently as determined in Stage 4, were identified using supplementary expert opinion elicitation as well as literature reviews.
Stage 6 (Chapter 7): Building of a detailed quantification model based on the causal maps and measurements obtained. The ability to derive such a quantification model shows that the causal models and measurements derived from the previous stages (Stages 1 to 5) can form the technical basis for developing dependability quantification models. Scope restrictions have led us to prioritize this demonstration effort. The demonstration was focused on a critical system, i.e., the reactor protection system. For this system, a ranking of the software dependability attributes by nuclear stakeholders was developed. As expected for this application, the stakeholder ranking identified safety as the most critical attribute to be quantified. A safety quantification model limited to the requirements phase of development was built. Two case studies were conducted for verification. A preliminary control gate for software safety for the requirements stage was proposed and applied to the first case study. The control gate allows a cost-effective selection of the duration of the requirements phase.
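A minimal sketch of the kind of coverage check described in Stage 4, i.e., determining which causal-map concepts have at least one associated measure; the concept and measure names are hypothetical.

```python
def measurement_coverage(concepts, measures):
    """Coverage check: fraction of causal-map concepts with at least one
    associated measure, plus the concepts still lacking measures."""
    covered = {c for c in concepts if measures.get(c)}
    missing = set(concepts) - covered
    return len(covered) / len(concepts), missing

# Hypothetical concepts from a causal map and their associated measures
concepts = ["requirements fault", "developer expertise", "test coverage"]
measures = {"requirements fault": ["fault count"],
            "test coverage": ["statement coverage"]}
print(measurement_coverage(concepts, measures))
# (0.666..., {'developer expertise'})
```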
- Published
- 2016
- Full Text
- View/download PDF