77 results for "non-homogeneous Poisson process (NHPP)"
Search Results
2. Considering Multiplicative Noise in a Software Reliability Growth Model Using Stochastic Differential Equation Approach
- Author
-
Chaudhary, Kuldeep, Kumar, Vijay, Kumar, Deepansha, Kumar, Pradeep, Pham, Hoang, Series Editor, Kapur, P. K., editor, Singh, Gurinder, editor, and Kumar, Vivek, editor
- Published
- 2024
- Full Text
- View/download PDF
3. A generalized software reliability prediction model for module based software incorporating testing effort with cost model
- Author
-
Yadav, Akshay Kumar, Srivastava, Shilpa, and Pant, Millie
- Published
- 2024
- Full Text
- View/download PDF
4. A Software Reliability Model Considering a Scale Parameter of the Uncertainty and a New Criterion.
- Author
-
Song, Kwang Yoon, Kim, Youn Su, Pham, Hoang, and Chang, In Hong
- Subjects
- *
SOFTWARE reliability , *MULTIPLE criteria decision making , *SOFTWARE failures , *MODELS & modelmaking , *POISSON processes - Abstract
It is becoming increasingly common for software to operate in various environments. However, even if the software performs well in the test phase, uncertain operating environments may cause new software failures. Previously proposed software reliability models for uncertain operating environments tend to suit only special cases because of the large number of assumptions they involve. To address this, this study proposes a new software reliability model that assumes an uncertain operating environment. The new model minimizes both its assumptions and the number of parameters that make it up, so that it can be applied to general situations better than previously proposed models. In addition, various criteria based on the difference between the predicted and estimated values have been used in the past to demonstrate the superiority of software reliability models; we also propose a new multi-criteria decision method that can simultaneously consider multiple goodness-of-fit criteria. The multi-criteria decision method using ranking is useful for comprehensive evaluation because, by ranking and weighting multiple criteria for each model, it does not rely on any individual criterion alone. Based on this, 21 existing models are compared with the proposed model using two datasets, and the proposed model is found to be superior on both datasets under 15 criteria and the multi-criteria decision method using ranking. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. A Software Reliability Model Considering a Scale Parameter of the Uncertainty and a New Criterion
- Author
-
Kwang Yoon Song, Youn Su Kim, Hoang Pham, and In Hong Chang
- Subjects
non-homogeneous Poisson process (NHPP) ,uncertain operating environments ,software reliability model ,multi-criteria decision method using ranking (MCDMR) ,Mathematics ,QA1-939 - Abstract
It is becoming increasingly common for software to operate in various environments. However, even if the software performs well in the test phase, uncertain operating environments may cause new software failures. Previously proposed software reliability models for uncertain operating environments tend to suit only special cases because of the large number of assumptions they involve. To address this, this study proposes a new software reliability model that assumes an uncertain operating environment. The new model minimizes both its assumptions and the number of parameters that make it up, so that it can be applied to general situations better than previously proposed models. In addition, various criteria based on the difference between the predicted and estimated values have been used in the past to demonstrate the superiority of software reliability models; we also propose a new multi-criteria decision method that can simultaneously consider multiple goodness-of-fit criteria. The multi-criteria decision method using ranking is useful for comprehensive evaluation because, by ranking and weighting multiple criteria for each model, it does not rely on any individual criterion alone. Based on this, 21 existing models are compared with the proposed model using two datasets, and the proposed model is found to be superior on both datasets under 15 criteria and the multi-criteria decision method using ranking.
- Published
- 2024
- Full Text
- View/download PDF
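As background for records 4 and 5, the sketch below shows the standard NHPP quantities that this family of models is built on: a mean value function m(t) and the conditional reliability derived from it. It is only a minimal illustration using the classical Goel-Okumoto form, not the authors' proposed model, and the parameter values are invented for the example.

```python
import numpy as np

def mvf_goel_okumoto(t, a, b):
    """Mean value function m(t) of the basic Goel-Okumoto NHPP SRGM:
    expected number of faults detected by testing time t."""
    return a * (1.0 - np.exp(-b * t))

def conditional_reliability(x, t, a, b):
    """R(x | t) = exp(-[m(t + x) - m(t)]): probability of observing no
    failure in (t, t + x] given testing up to time t."""
    return np.exp(-(mvf_goel_okumoto(t + x, a, b) - mvf_goel_okumoto(t, a, b)))

# Illustrative parameters: a = 120 total expected faults, b = 0.05 per hour
print(mvf_goel_okumoto(40.0, 120.0, 0.05))              # faults expected by 40 h
print(conditional_reliability(5.0, 40.0, 120.0, 0.05))  # reliability over the next 5 h
```

Models like the one in these records replace m(t) with a form carrying an extra scale parameter for environmental uncertainty, but they are still compared through the same reliability and goodness-of-fit quantities.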
6. A generalized multi-upgradation SRGM considering uncertainty of random field operating environments.
- Author
-
Mishra, Gaurav, Kapur, P. K., and Aggarwal, Anu G.
- Abstract
Nowadays software companies release upgraded versions of applications or software on a weekly, fortnightly, or monthly basis, mainly to meet customer requirements and beat the market competition in terms of features, speed, reliability, security, and many other attributes, as seen with iOS, Android, Facebook, and others. This shows the importance of multi-upgradation of software. During the last four decades, many SRGMs have been presented for single and multi-upgradation of applications to measure the number of bugs and the reliability of applications. All these SRGMs were presented in the fixed settings of the software development environment, which is very predictable. After the release of the software, however, the field operating environment is highly unpredictable and random; that is why we call it a random field operating environment (RFOE). Numerous SRGMs have been presented with the assumption that operating environments and development settings are similar, leaving the uncertainties of the operating environment unaccounted for, even though these two environments differ considerably in practice. In this paper, we present two multi-upgradation SRGMs to capture the uncertainty of the bug detection rate per unit of time in the RFOE. We have examined the performance of the proposed models using an actual failure data set. The results reveal that the goodness of fit and prediction performance are improved significantly. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
7. Performance Evaluation of a Cloud Datacenter Using CPU Utilization Data.
- Author
-
Li, Chen, Zheng, Junjun, Okamura, Hiroyuki, and Dohi, Tadashi
- Subjects
- *
COMPUTER architecture , *VIRTUAL machine systems , *COMPUTER engineering , *SYSTEMS design , *POISSON processes , *CLOUD computing - Abstract
Cloud computing and its associated virtualization have already been the most vital architectures in the current computer system design. Due to the popularity and progress of cloud computing in different organizations, performance evaluation of cloud computing is particularly significant, which helps computer designers make plans for the system's capacity. This paper aims to evaluate the performance of a cloud datacenter Bitbrains, using a queueing model only from CPU utilization data. More precisely, a simple but non-trivial queueing model is used to represent the task processing of each virtual machine (VM) in the cloud, where the input stream is supposed to follow a non-homogeneous Poisson process (NHPP). Then, the parameters of arrival streams for each VM in the cloud are estimated. Furthermore, the superposition of estimated arrivals is applied to represent the CPU behavior of an integrated virtual platform. Finally, the performance of the integrated virtual platform is evaluated based on the superposition of the estimations. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
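Record 7 assumes that each VM's task arrivals follow an NHPP. As a rough, self-contained illustration of that assumption (not the paper's estimation procedure), the sketch below simulates an NHPP by Lewis-Shedler thinning; the sinusoidal intensity standing in for a time-varying CPU task rate is purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_nhpp_thinning(intensity, t_end, lam_max):
    """Simulate NHPP arrival times on [0, t_end] by Lewis-Shedler thinning.
    `intensity` is lambda(t); `lam_max` must bound it from above on [0, t_end]."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)  # candidate from a homogeneous PP(lam_max)
        if t > t_end:
            return np.array(times)
        if rng.uniform() < intensity(t) / lam_max:  # accept with prob lambda(t)/lam_max
            times.append(t)

# Hypothetical diurnal arrival intensity for a VM's task stream (tasks per minute)
lam = lambda t: 5.0 + 3.0 * np.sin(2 * np.pi * t / 1440.0)
arrivals = simulate_nhpp_thinning(lam, t_end=1440.0, lam_max=8.0)
print(len(arrivals), "simulated arrivals in one day")
```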
8. Condition‐based preventive maintenance with a yield rate threshold for deteriorating repairable systems.
- Author
-
Huang, Yeu‐Shiang, Fang, Chih‐Chiang, and Wijaya, Stevan
- Subjects
- *
CONDITION-based maintenance , *SYSTEMS availability , *POISSON processes , *MAINTENANCE costs , *PROBLEM solving , *SYSTEM failures - Abstract
Repairable systems deteriorate with age and usage. In order to maintain acceptable reliability and prevent sudden failures, preventive maintenance (PM) is often applied to such systems. Scheduled PM actions can improve system availability and minimize losses due to breakdowns and failures. However, condition‐based PM is considered more relevant to system status than age‐based PM for deteriorating repairable systems. In this study, we consider the production yield rate as the condition variable when determining the optimal PM schedule. A non‐homogeneous Poisson process is used to describe the system deterioration, and the concept of system effective (virtual) age is also considered to make the proposed model more realistic. Three different condition‐based PM strategies are presented to provide more diverse choices for decision makers in terms of solving problems based on the situation at hand. The results show that the optimal yield rate threshold can reduce system failure while maintaining acceptable system availability and product quality with affordable costs due to a proper PM schedule. Furthermore, the results of the numerical application showed that the maintenance and penalty costs are the most sensitive to the total cost, where the adoption of the proposed PM strategies are more profitable when related costs are low. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
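Record 8 describes system deterioration with an NHPP and a virtual-age concept. A common concrete choice for such deterioration, assumed here only for illustration and not taken from the paper, is the power-law (Crow/AMSAA) process, whose expected number of failures between two times under minimal repair has a closed form:

```python
def plp_intensity(t, beta, theta):
    """Power-law process failure intensity, a common NHPP choice for
    deteriorating repairable systems (beta > 1 means wear-out)."""
    return (beta / theta) * (t / theta) ** (beta - 1.0)

def expected_failures(t1, t2, beta, theta):
    """Expected number of failures in (t1, t2] under minimal repair:
    the integrated intensity (t2/theta)^beta - (t1/theta)^beta."""
    return (t2 / theta) ** beta - (t1 / theta) ** beta

# Hypothetical parameters: wear-out with beta = 2.2, scale theta = 500 operating hours
print(expected_failures(0.0, 500.0, 2.2, 500.0))     # ~1 failure expected by t = theta
print(expected_failures(500.0, 1000.0, 2.2, 500.0))  # more failures in the next interval
```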
9. An S-Shaped Fault Detection and Correction SRGM Subject to Gamma-Distributed Random Field Environment and Release Time Optimization
- Author
-
Pradhan, Vishal, Dhar, Joydip, Kumar, Ajay, Bhargava, Ashish, Verma, Ajit Kumar, Series Editor, Kapur, P. K., Series Editor, Kumar, Uday, Series Editor, Singh, Gurinder, editor, and Klochkov, Yury S., editor
- Published
- 2020
- Full Text
- View/download PDF
10. Reliability and optimal release time analysis for multi up-gradation software with imperfect debugging and varied testing coverage under the effect of random field environments.
- Author
-
Chatterjee, Subhashis, Saha, Deepjyoti, Sharma, Akhilesh, and Verma, Yogesh
- Subjects
- *
RANDOM fields , *SOFTWARE reliability , *INDUCTIVE effect , *DEBUGGING , *COMPUTER software - Abstract
Due to change requests for upgrades that add new features, software organizations continually develop new versions of the software by adding features and improving the existing software. Various software reliability growth models have been proposed considering realistic issues that affect the reliability growth of software. Testing coverage is one crucial realistic issue that influences the fault detection and correction process. Because the difficulty of removing different faults differs, the same kind of testing coverage function cannot capture the fault detection process for all types of faults. Also, random effects exist in the field environment due to the difference between the testing environment and the operational environment, and this randomness also affects the reliability growth of software. In this paper, a software reliability growth model is proposed considering imperfect debugging, fault removal proportionality, and two types of testing coverage function in the presence of random field-environment effects, with different categories of faults taken into account. Though reliability is an important issue for software professionals, they are equally concerned with releasing the software at an optimal time and cost. Considering the testing cost and debugging cost as random, a cost model is proposed for release time analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
11. Reliability analysis of underground rock bolters using the renewal process, the non-homogeneous Poisson process and the Bayesian approach
- Author
-
La Roche-Carrier, Nicolas, Dituba Ngoma, Guyh, Kocaefe, Yasar, and Erchiqui, Fouad
- Published
- 2020
- Full Text
- View/download PDF
12. Testing-Effort based NHPP Software Reliability Growth Model with Change-point Approach.
- Author
-
PRADHAN, VISHAL, DHAR, JOYDIP, and KUMAR, AJAY
- Subjects
SOFTWARE reliability ,COMPUTER software development ,PROJECT managers ,POISSON processes - Abstract
Software project managers can execute well-prepared research tasks to utilize the associated testing resources cost-effectively using software reliability growth models (SRGMs). Over the last four decades, several SRGMs have been introduced to estimate reliability growth, applicable particularly to software development research. So far, it seems that very few SRGMs recognize potential adjustments in test-effort consumption. In certain instances, testing-resource allocation practices may be modified over time. Thus, this study integrates the essential principle of multiple change-points with the testing-effort function in the proposed models. Two benchmark datasets illustrate the efficiency and applicability of the proposed models. Normalized criteria distance is used to rank the models based on four comparison criteria on the two failure datasets. Experimental outcomes show that the proposed models offer reasonably better fault predictability compared to other models. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
13. Performance Evaluation of a Cloud Datacenter Using CPU Utilization Data
- Author
-
Chen Li, Junjun Zheng, Hiroyuki Okamura, and Tadashi Dohi
- Subjects
performance evaluation ,CPU utilization ,non-homogeneous Poisson process (NHPP) ,Mathematics ,QA1-939 - Abstract
Cloud computing and its associated virtualization have already been the most vital architectures in the current computer system design. Due to the popularity and progress of cloud computing in different organizations, performance evaluation of cloud computing is particularly significant, which helps computer designers make plans for the system’s capacity. This paper aims to evaluate the performance of a cloud datacenter Bitbrains, using a queueing model only from CPU utilization data. More precisely, a simple but non-trivial queueing model is used to represent the task processing of each virtual machine (VM) in the cloud, where the input stream is supposed to follow a non-homogeneous Poisson process (NHPP). Then, the parameters of arrival streams for each VM in the cloud are estimated. Furthermore, the superposition of estimated arrivals is applied to represent the CPU behavior of an integrated virtual platform. Finally, the performance of the integrated virtual platform is evaluated based on the superposition of the estimations.
- Published
- 2023
- Full Text
- View/download PDF
14. A Study of Incorporation of Deep Learning Into Software Reliability Modeling and Assessment.
- Author
-
Wu, Cheng-Yang and Huang, Chin-Yu
- Subjects
- *
SOFTWARE reliability , *DEEP learning , *OPEN source software , *SOFTWARE failures , *ALGORITHMS , *MATHEMATICAL formulas - Abstract
Software is widely used in many application domains. The most popular software are used by millions every day. How to accurately predict and assess the reliability of developed software is becoming increasingly important for project managers and developers. Previous studies have primarily used the software reliability growth model (SRGM) to evaluate and predict software reliability, but prediction results cannot be accurate at particular times or in particular situations. One of the main reasons is that simplified assumptions and abstractions are usually made to simplify the problem when developing SRGMs. Selecting an appropriate SRGM should depend on the key characteristics of the software project. In this article, we propose a deep learning-based approach for software reliability prediction and assessment. Specifically, we clearly demonstrate how to derive mathematical expressions from the computational methods of deep learning models and how to determine the correlation between them and the mathematical formula of SRGMs, and then, we use the back-propagation algorithm to obtain the SRGM parameters. Furthermore, we further integrate some deep learning-based SRGMs and also propose a method for the weighted assignment of combinations. Three real open source software failure datasets are used to evaluate the performance of the proposed models compared to selected SRGMs. The experimental results reveal that our proposed deep learning-based models and their combinations perform better than several classical SRGMs. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
15. Economic Impact of a Failure Using Life-Cycle Cost Analysis
- Author
-
Parra Márquez, Carlos, Crespo Márquez, Adolfo, González-Prida Díaz, Vicente, Gómez Fernández, Juan Francisco, Kristjanpoller Rodríguez, Fredy, Viveros Gunckel, Pablo, Crespo Márquez, Adolfo, editor, González-Prida Díaz, Vicente, editor, and Gómez Fernández, Juan Francisco, editor
- Published
- 2018
- Full Text
- View/download PDF
16. Modeling Software Fault-Detection and Fault-Correction Processes by Considering the Dependencies between Fault Amounts.
- Author
-
Li, Qiuying and Pham, Hoang
- Subjects
SOFTWARE reliability ,COMPUTER software development ,RANDOM variables ,LEAST squares ,COMPUTER software ,POISSON processes - Abstract
Many NHPP software reliability growth models (SRGMs) have been proposed to assess software reliability during the past 40 years, but most of them have focused on modeling the fault detection process (FDP) in two ways: one is to ignore the fault correction process (FCP), i.e., faults are assumed to be instantaneously removed after the failure caused by the faults is detected. However, in real software development this is not always realistic, as fault removal usually needs time, i.e., the faults causing failures cannot always be removed at once and the detected failures will become more and more difficult to correct as testing progresses. Another way to model the fault correction process is to consider the time delay between fault detection and fault correction. The time delay has been assumed to be constant, a function dependent on time, or a random variable following some kind of distribution. In this paper, some useful approaches to the modeling of dual fault detection and correction processes are discussed. The dependencies between the fault amounts of the dual processes are considered instead of a fault correction time-delay. A model aiming to integrate fault-detection processes and fault-correction processes, along with the incorporation of a fault introduction rate and testing coverage rate into the software reliability evaluation, is proposed. The model parameters are estimated using the Least Squares Estimation (LSE) method. The descriptive and predictive performance of this proposed model and other existing NHPP SRGMs are investigated by using three real data-sets based on four criteria, respectively. The results show that the new model can be significantly effective in yielding better reliability estimation and prediction. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
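Record 16 estimates its model parameters by least squares. The sketch below shows that style of fitting in miniature, using the plain Goel-Okumoto mean value function and invented weekly fault counts rather than the authors' dual detection-correction model and datasets; the MSE computed at the end is one of the usual comparison criteria in such studies.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical weekly cumulative fault counts (not the datasets used in the paper)
weeks  = np.arange(1, 11, dtype=float)
faults = np.array([4, 9, 15, 19, 24, 27, 30, 32, 33, 34], dtype=float)

def m(t, a, b):
    """Goel-Okumoto mean value function, standing in for the paper's detection model."""
    return a * (1.0 - np.exp(-b * t))

# Least squares estimation of (a, b) from the cumulative counts
(a_hat, b_hat), _ = curve_fit(m, weeks, faults, p0=[40.0, 0.2])
mse = np.mean((faults - m(weeks, a_hat, b_hat)) ** 2)
print(f"a = {a_hat:.1f} total faults, b = {b_hat:.3f}, MSE = {mse:.2f}")
```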
17. Reliability Analysis of Repairable Systems Based on a Two-Segment Bathtub-Shaped Failure Intensity Function
- Author
-
Xuejiao Du, Zhaojun Yang, Chuanhai Chen, Xiaoxu Li, and Michael G. Pecht
- Subjects
Bathtub-shaped failure intensity ,reliability model ,repairable system ,non-homogeneous Poisson process (NHPP) ,maximum likelihood method ,Electrical engineering. Electronics. Nuclear engineering ,TK1-9971 - Abstract
The reliability analysis of complex repairable systems is an important but computationally intensive task that requires fitting the failure data of whole life cycles well. Because the existing reliability models are mainly based on the assumptions that systems are either unrepairable or will go through an “as good as new” type of repair, which does not describe actual situations effectively and precisely, a two-segment failure intensity model based on a sectional non-homogeneous Poisson process is developed. This model is capable of analyzing repairable systems with bathtub-shaped failure intensity. It considers minimal maintenance activities and preserves the time series of failures based on the whole life cycle. The advantages of this model lie in its flexibility to describe monotonic and non-monotonic failure intensities and its practicality to determine the burn-in or replacement time for repairable systems. Three real lifetime failure data sets are applied to illustrate the developed model. The results show that the model performs well regarding the Akaike information criterion value, mean squared errors, and Cramér-von Mises values.
- Published
- 2018
- Full Text
- View/download PDF
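Record 17 fits a two-segment NHPP by maximum likelihood and compares models by AIC. The sketch below shows the single-segment building block only: the time-truncated power-law process, whose maximum-likelihood estimates are closed-form, together with the NHPP log-likelihood and AIC. The failure times are hypothetical, and joining two such segments into a bathtub shape (as the paper does) is not reproduced here.

```python
import numpy as np

def fit_plp_time_truncated(times, T):
    """Closed-form MLEs for the power-law NHPP observed on [0, T]
    (single segment; the paper joins two such segments into a bathtub shape)."""
    times = np.asarray(times, dtype=float)
    n = len(times)
    beta = n / np.sum(np.log(T / times))
    theta = T / n ** (1.0 / beta)
    return beta, theta

def plp_loglik(times, T, beta, theta):
    """NHPP log-likelihood: sum of log-intensities minus the cumulative intensity."""
    lam = (beta / theta) * (np.asarray(times, dtype=float) / theta) ** (beta - 1.0)
    return np.sum(np.log(lam)) - (T / theta) ** beta

# Hypothetical failure times (hours) for one machine, observation truncated at T = 1000 h
t = [80.0, 210.0, 370.0, 520.0, 700.0, 820.0, 930.0]
beta, theta = fit_plp_time_truncated(t, T=1000.0)
aic = 2 * 2 - 2 * plp_loglik(t, 1000.0, beta, theta)  # 2 estimated parameters
print(f"beta = {beta:.2f}, theta = {theta:.0f} h, AIC = {aic:.1f}")
```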
18. Modeling Software Fault-Detection and Fault-Correction Processes by Considering the Dependencies between Fault Amounts
- Author
-
Qiuying Li and Hoang Pham
- Subjects
imperfect debugging ,fault detection ,fault correction ,testing coverage ,software reliability growth model (SRGMs) ,non-homogeneous Poisson process (NHPP) ,Technology ,Engineering (General). Civil engineering (General) ,TA1-2040 ,Biology (General) ,QH301-705.5 ,Physics ,QC1-999 ,Chemistry ,QD1-999 - Abstract
Many NHPP software reliability growth models (SRGMs) have been proposed to assess software reliability during the past 40 years, but most of them have focused on modeling the fault detection process (FDP) in two ways: one is to ignore the fault correction process (FCP), i.e., faults are assumed to be instantaneously removed after the failure caused by the faults is detected. However, in real software development this is not always realistic, as fault removal usually needs time, i.e., the faults causing failures cannot always be removed at once and the detected failures will become more and more difficult to correct as testing progresses. Another way to model the fault correction process is to consider the time delay between fault detection and fault correction. The time delay has been assumed to be constant, a function dependent on time, or a random variable following some kind of distribution. In this paper, some useful approaches to the modeling of dual fault detection and correction processes are discussed. The dependencies between the fault amounts of the dual processes are considered instead of a fault correction time-delay. A model aiming to integrate fault-detection processes and fault-correction processes, along with the incorporation of a fault introduction rate and testing coverage rate into the software reliability evaluation, is proposed. The model parameters are estimated using the Least Squares Estimation (LSE) method. The descriptive and predictive performance of this proposed model and other existing NHPP SRGMs are investigated by using three real data-sets based on four criteria, respectively. The results show that the new model can be significantly effective in yielding better reliability estimation and prediction.
- Published
- 2021
- Full Text
- View/download PDF
19. Modelling age based replacement decisions considering shocks and failure rate
- Author
-
Akbar Alem Tabriz, Behrooz Khorshidvand, and Ashkan Ayough
- Published
- 2016
- Full Text
- View/download PDF
20. Open Source Software Reliability Model with the Decreasing Trend of Fault Detection Rate.
- Author
-
Wang, Jinyong and Mi, Xiaoping
- Subjects
- *
SOFTWARE reliability , *OPEN source software - Abstract
Software reliability assessment methods have been changed from closed to open source software (OSS). Although numerous new approaches for improving OSS reliability are formulated, they are not used in practice due to their inaccuracy. A new proposed model considering the decreasing trend of fault detection rate is developed in this study to effectively improve OSS reliability. We analyse the changes of the instantaneous fault detection rate over time by using real-world software fault count data from two actual OSS projects, namely, Apache and GNOME, to validate the proposed model performance. Results show that the proposed model with the decreasing trend of fault detection rate has better fitting and predictive performance than the traditional closed source software and other OSS reliability models. The proposed model for OSS can further accurately fit and predict the failure process and thus can assist in improving the quality of OSS systems in real-world OSS projects. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
21. A STUDY OF SOFTWARE RELIABILITY GROWTH WITH IMPERFECT DEBUGGING FOR TIME-DEPENDENT POTENTIAL ERRORS.
- Author
-
Kuei-Chen Chiu, Yeu-Shiang Huang, and I-Chi Huang
- Subjects
- *
SOFTWARE reliability , *DEBUGGING , *COMPUTER software testing , *SINE function , *SYSTEMS software , *POISSON processes - Abstract
Over the last few decades, various software reliability growth models (SRGM) have been proposed, and in recent years, a gradual but marked shift has led to a focus on the balance between acceptable software reliability and affordable software testing costs. Chiu et al. (2008) proposed an SRGM from the perspective of learning effects that is more flexible in terms of fitting various software error data, although it has a restrictive assumption of a constant number of potential errors. In this paper, we consider a software reliability growth model in which the number of potential errors varies over the debugging period since wrongly fixing an error may cause more errors, while correctly debugging one error may resolve others. However, such variations will gradually converge as the testing staff becomes more familiar with the software system. In order to describe this phenomenon, a sine function is introduced in this study to describe the time-dependent behavior of the number of potential errors with imperfect debugging, and the expected testing cost is thus evaluated to assist in the determination of an optimal software release policy. A numerical example is illustrated to verify the effectiveness of the proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2019
22. Software reliability growth models: A comparison of linear and exponential fault content functions for study of imperfect debugging situations
- Author
-
Javaid Iqbal
- Subjects
software reliability ,reliability growth ,software reliability growth model (srgm) ,non-homogeneous poisson process (nhpp) ,learning effect ,fault detection rate (fdr) ,imperfect debugging ,Engineering (General). Civil engineering (General) ,TA1-2040 - Abstract
The software testing process basically aims at building confidence in the software for its use in real-world applications. The reliability of a software system is always important to us. As we carry out error detection and correction on our software, the reliability of the software grows. With an aim to model this growth in software reliability, many formulations in the form of Software Reliability Growth Models have been proposed. Many of these are based on the Non-Homogeneous Poisson Process framework. In this paper, a parallel comparison of the performance of the proposed software reliability growth models is carried out, considering linear and exponential fault content functions for the study of imperfect debugging situations. The performance of the proposed models has been compared with some famous existing software reliability models, and the proposed models have been validated on some real-world datasets. Three goodness-of-fit criteria, namely mean square error, predictive-ratio risk and predictive power, are used to carry out the performance comparison of the models. Using these comparison criteria on six actual failure datasets, it is concluded that the proposed Model-2, which always outperforms Model-1, fits the actual failure data better and has better predictive power than the other considered SRGMs for at least two data sets.
- Published
- 2017
- Full Text
- View/download PDF
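Record 22 compares linear and exponential fault content functions under imperfect debugging. A minimal numerical sketch of the common underlying formulation, dm/dt = b [a(t) - m(t)] with a time-varying fault content a(t), is given below; the parameter values and the simple Euler integration are illustrative assumptions, not the paper's models.

```python
import numpy as np

def solve_mvf(a_func, b, t_grid):
    """Numerically integrate dm/dt = b * (a(t) - m(t)), the basic imperfect-debugging
    NHPP formulation in which the fault content a(t) can grow during testing."""
    m = np.zeros_like(t_grid)
    for i in range(1, len(t_grid)):
        dt = t_grid[i] - t_grid[i - 1]
        m[i] = m[i - 1] + dt * b * (a_func(t_grid[i - 1]) - m[i - 1])
    return m

t = np.linspace(0.0, 50.0, 501)
a0, alpha, b = 100.0, 0.02, 0.1                                # hypothetical parameters
m_linear = solve_mvf(lambda s: a0 * (1.0 + alpha * s), b, t)   # linear fault content
m_exp    = solve_mvf(lambda s: a0 * np.exp(alpha * s), b, t)   # exponential fault content
print(m_linear[-1], m_exp[-1])  # expected faults detected by t = 50 under each assumption
```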
23. NHPP software reliability model considering the uncertainty of operating environments with imperfect debugging and testing coverage.
- Author
-
Li, Qiuying and Pham, Hoang
- Subjects
- *
COMPUTER software , *SOFTWARE failures , *RELIABILITY in engineering , *DEBUGGING , *SENSITIVITY analysis , *POISSON processes - Abstract
In this paper, we propose a testing-coverage software reliability model that considers not only imperfect debugging (ID) but also the uncertainty of operating environments, based on a non-homogeneous Poisson process (NHPP). Software is usually tested in a given controlled environment, but it may be used in different operating environments by different users, which are unknown to the developers. Many NHPP software reliability growth models (SRGMs) have been developed to estimate software reliability measures, but a common underlying assumption of most of these models is that the operating environment is the same as the development environment. In fact, because of the uncertainty in the operating environments, the environment may considerably influence the reliability and the software's performance in unpredictable ways. So when a software system works in a field environment, its reliability is usually different from the theoretical reliability, and also from all its similar applications in other fields. In this paper, a new model is proposed that considers a fault detection rate based on testing coverage and covers ID subject to the uncertainty of operating environments. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real software failure data based on seven criteria. An improved normalized criteria distance (NCD) method is also used to rank and select the best model in the context of a set of goodness-of-fit criteria taken all together. All results demonstrate that the new model can give significantly improved goodness-of-fit and predictive performance. Finally, the optimal software release time based on cost and reliability requirements and its sensitivity analysis are discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
24. MODELING OF SOFTWARE FAULT DETECTION AND CORRECTION PROCESSES WITH FAULT DEPENDENCY.
- Author
-
Rui PENG and Qingqing ZHAI
- Subjects
FAULT diagnosis ,DYNAMIC testing ,CHEMICAL process control ,ELECTRIC power system faults ,ELECTRIC faults ,COMPUTER software
- Published
- 2017
- Full Text
- View/download PDF
25. An imperfect software debugging model considering irregular fluctuation of fault introduction rate.
- Author
-
Wang, Jinyong
- Subjects
FAULT location (Engineering) ,DEBUGGING ,COMPUTER software ,POISSON processes ,SOFTWARE reliability - Abstract
In general, software testing is a complicated and uncertain process. New faults can be introduced into the software during each fault removal. This process is called imperfect debugging. For simplicity, fault introduction rates are generally assumed to be constant. However, software debugging can be affected by many factors, such as subjective and objective influences, the difficulty and complexity of fault removal, the dependent relationships among faults, the changes in different phases of software testing, and the test schedules. Thus, the rate of fault introduction is not a constant, but is an irregularly fluctuating variable in software debugging. In this article, we propose a model with imperfect software debugging considering the irregular changes in fault introduction rates during software debugging. Experimental results reveal that our proposed model has good fitting capability and considerably stronger forecasting performance than that of the other models, and that the proposed model assumptions are close to the actual software debugging situation. Moreover, research on the irregular fluctuation of the fault introduction rate in software debugging has a certain reference value and important significance for software-intensive product testing, for instance, cloud computing. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
26. Lehmann-Type Laplace distribution-Type I software reliability growth model.
- Author
-
Akilandeswari, V., Poornima, R., and Saavithri, V.
- Abstract
In this paper, Lehmann-Type Laplace Type I reliability growth model is proposed for early detection of software failure based on time between failure observations. Cumulative time between failures of the software data is assumed to follow Lehmann-Type Laplace distribution-Type I (LLD-Type I). The parameters are estimated using Profile Likelihood Method. In terms of AIC and BIC, this distribution is found to be a better fit for the software failure data than Goel-Okumoto, Weibull, Pareto Type III and Kumaraswamy Modified Inverse Weibull distributions which are commonly used in reliability analysis. A LLD-Type I control mechanism is used to detect the failure points of a software data. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
27. Software Reliability Growth Model Considering First-Step and Second-Step Fault Dependency.
- Author
-
Peng, Rui, Ma, Xiaoyang, Zhai, Qingqing, and Gao, Kaiye
- Abstract
As one of the most important indexes for evaluating the quality of software, software reliability has seen increasing development in recent years. We investigate a software reliability growth model (SRGM). The application of this model is to predict the occurrence of software faults based on the non-homogeneous Poisson process (NHPP). Unlike the independence assumptions in other models, we consider fault dependency. The testing faults are divided into three classes in this model: leading faults, first-step dependent faults and second-step dependent faults. The leading faults, occurring independently, follow an NHPP, while the first-step dependent faults only become detectable after the related leading faults are detected. The second-step dependent faults can only be detected after the related first-step dependent faults are detected. Then, the combined model is built on the basis of the three sub-processes. Finally, an illustration based on a real dataset is presented to verify the proposed model. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
28. A software reliability model with time-dependent fault detection and fault removal.
- Author
-
Zhu, Mengmeng and Pham, Hoang
- Subjects
SOFTWARE reliability ,RELIABILITY in engineering ,COMPUTER reliability ,COMPUTER performance ,TIME series analysis ,FAULT-tolerant computing - Abstract
The common assumption of most existing software reliability growth models is that faults are independent and can be removed perfectly upon detection. However, this is often not true due to various factors including software complexity, programmer proficiency, organization hierarchy, etc. In this paper, we develop a software reliability model with considerations of fault-dependent detection, imperfect fault removal and the maximum number of faults in the software. The genetic algorithm (GA) method is applied to estimate the model parameters. Four goodness-of-fit criteria, namely mean-squared error, predictive-ratio risk, predictive power, and the Akaike information criterion, are used to compare the proposed model and several existing software reliability models. Three datasets collected in industry are used to demonstrate that the proposed model fits better than other existing software reliability models based on the studied criteria. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
29. Modelling age based replacement decisions considering shocks and failure rate.
- Author
-
Alem Tabriz, Akbar, Khorshidvand, Behrooz, and Ayough, Ashkan
- Abstract
Purpose – The purpose of this paper is to present age-based replacement models subject to shocks and failure rate in order to determine the optimal replacement cycle, so that, subject to system reliability, the maintenance costs of the system are minimized.
Design/methodology/approach – First, the modeling is carried out with respect to the assumptions and the two major factors (shocks and failure rate). Second, the optimal parameters are obtained using MATLAB. Finally, the results are analyzed and the models compared.
Findings – The analysis shows that all models provide an optimal replacement cycle at which the cost rate of the system, accounting for the reliability rate, is minimal. Also, when moving from one unit to two units, the reliability rate increases much faster than the cost rate.
Originality/value – This work provides models that consider shocks as an external factor in addition to the failure rate (an internal factor). By considering these two factors, more comprehensive and adaptable models are proposed. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
30. Modeling Software Fault-Detection and Fault-Correction Processes by Considering the Dependencies between Fault Amounts
- Author
-
Hoang Pham and Qiuying Li
- Subjects
imperfect debugging ,fault detection ,fault correction ,testing coverage ,software reliability growth model (SRGMs) ,non-homogeneous Poisson process (NHPP) ,Reliability engineering ,Software development ,Software quality ,Random variable - Abstract
Many NHPP software reliability growth models (SRGMs) have been proposed to assess software reliability during the past 40 years, but most of them have focused on modeling the fault detection process (FDP) in two ways: one is to ignore the fault correction process (FCP), i.e., faults are assumed to be instantaneously removed after the failure caused by the faults is detected. However, in real software development this is not always realistic, as fault removal usually needs time, i.e., the faults causing failures cannot always be removed at once and the detected failures will become more and more difficult to correct as testing progresses. Another way to model the fault correction process is to consider the time delay between fault detection and fault correction. The time delay has been assumed to be constant, a function dependent on time, or a random variable following some kind of distribution. In this paper, some useful approaches to the modeling of dual fault detection and correction processes are discussed. The dependencies between the fault amounts of the dual processes are considered instead of a fault correction time-delay. A model aiming to integrate fault-detection processes and fault-correction processes, along with the incorporation of a fault introduction rate and testing coverage rate into the software reliability evaluation, is proposed. The model parameters are estimated using the Least Squares Estimation (LSE) method. The descriptive and predictive performance of this proposed model and other existing NHPP SRGMs are investigated by using three real data-sets based on four criteria, respectively. The results show that the new model can be significantly effective in yielding better reliability estimation and prediction.
- Published
- 2021
- Full Text
- View/download PDF
31. Multi-release software model based on testing coverage incorporating random effect (SDE).
- Author
-
Bibyan R, Anand S, Aggarwal AG, and Kaur G
- Abstract
In the past, various Software Reliability Growth Models (SRGMs) have been proposed using different parameters to improve software worthiness. Testing coverage is one such parameter that has been studied in numerous software models and has a proven influence on reliability models. To sustain themselves in the market, software firms keep upgrading their software with new features or enhancements by rectifying previously reported faults. There is also an impact of random effects on testing coverage during both the testing and operational phases. In this paper, we propose a software reliability growth model based on testing coverage with random effects and imperfect debugging, and then extend it to the multi-release setting. The proposed model is validated on the dataset from Tandem Computers, and the results for each release are discussed against several performance criteria. The numerical results illustrate that the models fit the failure data significantly well.
• The random effect in the testing coverage rate is handled using Stochastic Differential Equations (SDE).
• Three testing coverage functions are used: Exponential, Weibull, and S-shaped.
• Four releases of the software model are presented.
- Published
- 2023
- Full Text
- View/download PDF
32. A software reliability growth model with two types of learning and a negligence factor.
- Author
-
Iqbal, Javaid, Ahmad, N., and Quadri, S. M. K.
- Abstract
The reliability attribute of dynamic software systems is key to their normal operational behavior. Although acquiring a perfect (hundred percent) level of reliability for software may be practically very difficult, achieving near-perfect reliability growth levels is very much possible using reliability engineering. Many SRGMs have been proposed, including some based on the Non-Homogeneous Poisson Process (NHPP). Realistic characteristics of human learning and the experiential gain of new skills for better detection and correction of software faults are being incorporated into such models. This paper incorporates two types of learning effects and a negligence factor into the SRGM with learning effect proposed by Chiu, Huang and Lee, taking advantage of the improvement proposed by Chiu, namely the introduction of a negligence factor into the Chiu, Huang and Lee SRGM. We simultaneously incorporate a learning effect that exists in two forms, autonomous learning and acquired learning, as well as a negligence factor. The resultant model equations are subjected to statistical analysis and the results are satisfactory. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
33. Software Reliability Growth Model with testing effort using learning function.
- Author
-
Khatri, Sunil Kumar, Kumar, Deepak, Dwivedi, Asit, and Mrinal, Nitika
- Abstract
Software Reliability Growth Models have been proposed in the literature to measure the quality of software and to release the software at minimum cost. Testing is an important part of finding faults during the Software Development Life Cycle of integrated software. Testing can be defined as the execution of a program to find a fault which might have been introduced during the testing time under different assumptions. The testing team may not be able to remove a fault perfectly on detection of the failure, and the original fault may remain or be replaced by another fault. The former phenomenon is known as imperfect fault removal, while the latter is called error generation. In this paper, we propose a new SRGM with two types of imperfect debugging and testing effort, using a learning function that reflects the expertise gained by the testing team, which depends on the software's complexity, the skills of the debugging team, the available manpower and the software development environment; the model is estimated and compared with other existing models on real-time data sets. The estimation results show the comparative performance and application of different SRGMs with testing effort. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
- Full Text
- View/download PDF
34. A new software reliability model with Vtub-shaped fault-detection rate and the uncertainty of operating environments.
- Author
-
Pham, Hoang
- Subjects
- *
SOFTWARE reliability , *DEBUGGING , *POISSON processes , *RANKING (Statistics) , *MEAN square algorithms , *PREDICTION models - Abstract
Many software reliability growth models (SRGMs) have been developed in the past three decades to estimate software reliability measures such as the number of remaining faults and software reliability. The common underlying assumption of many existing models is that the operating environment and the development environment are the same. This is often not the case in practice because the operating environments are usually unknown due to the uncertainty of environments in the field. In this paper, we develop a new software reliability model incorporating the uncertainty of the system fault-detection rate per unit of time subject to operating environments. Examples are included to illustrate the goodness-of-fit of the proposed model and several existing non-homogeneous Poisson process (NHPP) models based on a set of failure data collected from software applications. Three goodness-of-fit criteria, namely mean square error, predictive power and predictive-ratio risk, are used as an example to illustrate model comparisons. The results show that the proposed model fits significantly better than other existing NHPP models based on the mean square error value. As we know, different criteria have different impacts in measuring software reliability, and no software reliability model is optimal for all contributing criteria. In this paper, we also discuss a new method, called normalized criteria distance, for ranking and selecting the best model from among SRGMs based on a set of criteria taken all together. Example results show the proposed method offers a promising technique for selecting the best model based on a set of contributing criteria. [ABSTRACT FROM AUTHOR]
- Published
- 2014
- Full Text
- View/download PDF
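Record 34 compares models with mean square error, predictive power, and predictive-ratio risk before combining criteria through normalized criteria distance. The sketch below implements those three criteria in their commonly used forms; the normalized-criteria-distance combination itself is the paper's contribution and is not reproduced, and the observed/fitted counts are invented.

```python
import numpy as np

def mse(y, yhat, n_params):
    """Mean squared error with a degrees-of-freedom correction, as commonly
    used when comparing SRGM fits."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return np.sum((y - yhat) ** 2) / (len(y) - n_params)

def predictive_ratio_risk(y, yhat):
    """PRR: squared relative deviation measured against the fitted values,
    which penalizes underestimation of the observed cumulative fault count."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return np.sum(((yhat - y) / yhat) ** 2)

def predictive_power(y, yhat):
    """PP: squared relative deviation measured against the observed values."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return np.sum(((yhat - y) / y) ** 2)

# Hypothetical observed vs fitted cumulative fault counts for one candidate model
y    = [5, 11, 16, 20, 23, 25]
yhat = [6.1, 10.4, 15.2, 19.6, 23.1, 25.8]
print(mse(y, yhat, n_params=3), predictive_ratio_risk(y, yhat), predictive_power(y, yhat))
```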
35. Reliability Analysis of Repairable Systems Based on a Two-Segment Bathtub-Shaped Failure Intensity Function
- Author
-
Xiaoxu Li, Du Xuejiao, Chuanhai Chen, Michael Pecht, and Zhaojun Yang
- Subjects
Bathtub-shaped failure intensity ,reliability model ,repairable system ,non-homogeneous Poisson process (NHPP) ,maximum likelihood method ,Akaike information criterion ,Maintenance engineering ,Reliability engineering ,Weibull distribution - Abstract
The reliability analysis of complex repairable systems is an important but computationally intensive task that requires fitting the failure data of whole life cycles well. Because the existing reliability models are mainly based on the assumptions that systems are either unrepairable or will go through an “as good as new” type of repair, which does not describe actual situations effectively and precisely, a two-segment failure intensity model based on a sectional non-homogeneous Poisson process is developed. This model is capable of analyzing repairable systems with bathtub-shaped failure intensity. It considers minimal maintenance activities and preserves the time series of failures based on the whole life cycle. The advantages of this model lie in its flexibility to describe monotonic and non-monotonic failure intensities and its practicality to determine the burn-in or replacement time for repairable systems. Three real lifetime failure data sets are applied to illustrate the developed model. The results show that the model performs well regarding the Akaike information criterion value, mean squared errors, and Cramér-von Mises values.
- Published
- 2018
- Full Text
- View/download PDF
36. Incorporating S-shaped testing-effort functions into NHPP software reliability model with imperfect debugging.
- Author
-
Qiuying Li, Haifeng Li, and Minyan Lu
- Subjects
- *
MATHEMATICAL functions , *DEBUGGING , *SOFTWARE reliability , *COMPUTER storage devices , *COMPARATIVE studies - Abstract
Testing-effort (TE) and imperfect debugging (ID) in the reliability modeling process may further improve the fitting and prediction results of software reliability growth models (SRGMs). For describing the S-shaped varying trend of TE increasing rate more accurately, first, two S-shaped testing-effort functions (TEFs), i.e., delayed S-shaped TEF (DS-TEF) and inflected S-shaped TEF (IS-TEF), are proposed. Then these two TEFs are incorporated into various types (exponential-type, delayed S-shaped and inflected S-shaped) of non-homogeneous Poisson process (NHPP) SRGMs with two forms of ID respectively for obtaining a series of new NHPP SRGMs which consider S-shaped TEFs as well as ID. Finally these new SRGMs and several comparison NHPP SRGMs are applied into four real failure data-sets respectively for investigating the fitting and prediction power of these new SRGMs. The experimental results show that: (i) the proposed IS-TEF is more suitable and flexible for describing the consumption of TE than the previous TEFs; (ii) incorporating TEFs into the inflected S-shaped NHPP SRGM may be more effective and appropriate compared with the exponential-type and the delayed S-shaped NHPP SRGMs; (iii) the inflected S-shaped NHPP SRGM considering both IS-TEF and ID yields the most accurate fitting and prediction results than the other comparison NHPP SRGMs. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
37. Reliability evaluation of hard disk drive failures based on counting processes
- Author
-
Ye, Zhi-Sheng, Xie, Min, and Tang, Loon-Ching
- Subjects
- *
RELIABILITY in engineering , *HARD disks , *SYSTEM failures , *PARAMETER estimation , *STATISTICAL hypothesis testing , *COMPUTER simulation - Abstract
Abstract: Reliability assessment for hard disk drives (HDDs) is important yet difficult for manufacturers. Motivated by the fact that the particle accumulation in the HDDs, which accounts for most HDD catastrophic failures, is contributed from the internal and external sources, a counting process with two arrival sources is proposed to model the particle cumulative process in HDDs. This model successfully explains the collapse of traditional ALT approaches for accelerated life test data. Parameter estimation and hypothesis tests for the model are developed and illustrated with real data from a HDD test. A simulation study is conducted to examine the accuracy of large sample normal approximations that are used to test existence of the internal and external sources. [Copyright Elsevier]
- Published
- 2013
- Full Text
- View/download PDF
38. Enhancing the accuracy of software reliability prediction through quantifying the effect of test phase transitions
- Author
-
Lin, Chu-Ti
- Subjects
- *
SOFTWARE reliability , *PREDICTION models , *CUSTOMER satisfaction , *BUDGET , *PERFORMANCE evaluation , *COST effectiveness - Abstract
Abstract: Since marketing unreliable software products will lead to customer dissatisfaction, the products usually undergo several phases of testing to minimize the number of faults before being released to the market. However, because of budget limitations and time constraints, very long test periods are impractical. Thus, project managers need to balance the cost of testing and the possible effects of any remaining (undetected) faults. Software reliability growth models (SRGMs) can be used to predict the fault detection process and help project managers determine a cost-effective time to stop testing and release the product. In practice, software testing may be divided into several phases, each of which has a different objective. Few existing SRGMs consider the influence of test phase transitions even though they may have a significant effect on fault detection during the test phase. Therefore, to address this research gap, we quantify the variations in the effect of different test phases and propose a software reliability modeling framework. The SRGMs obtained from the proposed framework can be used to gauge the influence of test phase transitions. We validated the framework’s performance on a failure data set collected from a real software project. The results demonstrate that the proposed framework accurately reflects the influence of test phase transitions and yields a strong performance in terms of fitting as well as predicting the fault detection process. [Copyright Elsevier]
- Published
- 2012
- Full Text
- View/download PDF
39. Reliability assessment of multiple NC machine tools with minimal repair.
- Author
-
WANG Zhi-ming, YANG Jian-guo, WANG Guo-qiang, and ZHANG Gen-bao
- Subjects
RELIABILITY in engineering ,MACHINE tools ,FAILURE time data analysis ,MACHINE parts ,STOCHASTIC processes - Abstract
A repairable-system approach to reliability assessment for multiple NC machine tools with time truncation, based on a stochastic point process, is proposed, and a non-homogeneous Poisson process (NHPP) model for failure time is built. The point and confidence interval (CI) estimators of the model parameters and reliability indices, such as the accumulated mean time between failures (MTBF) and the failure intensity at the truncation time, are all given by the Fisher Information Matrix (FIM) method. The Akaike information criterion (AIC) values show that the repairable-system approach has an advantage over the statistical distribution method when the failure times of NC machine tools under minimal repair exhibit a monotonic trend in reliability analysis. The proposed model passes the trend test, the renewal process test and the goodness-of-fit test, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2011
40. Enhancing software reliability modeling and prediction through the introduction of time-variable fault reduction factor
- Author
-
Hsu, Chao-Jung, Huang, Chin-Yu, and Chang, Jun-Ru
- Subjects
- *
COMPUTER software , *RELIABILITY in engineering , *PREDICTION models , *SOFTWARE productivity , *ESTIMATION theory , *COMPUTER software testing , *POISSON processes , *DEBUGGING - Abstract
Abstract: Over the past three decades, many software reliability models with different parameters, reflecting various testing characteristics, have been proposed for estimating the reliability growth of software products. We have noticed that one of the most important parameters controlling software reliability growth is the fault reduction factor (FRF) proposed by Musa. FRF is generally defined as the ratio of net fault reduction to failures experienced. During the software testing process, FRF could be influenced by many environmental factors, such as imperfect debugging, debugging time lag, etc. Thus, in this paper, we first analyze some real data to observe the trends of FRF, and consider FRF to be a time-variable function. We further study how to integrate time-variable FRF into software reliability growth modeling. Some experimental results show that the proposed models can improve the accuracy of software reliability estimation. Finally, sensitivity analyses of various optimal release times based on cost and reliability requirements are discussed. The analytic results indicate that adjusting the value of FRF may affect the release time as well as the development cost. [ABSTRACT FROM AUTHOR]
- Published
- 2011
- Full Text
- View/download PDF
41. Software reliability analysis and assessment using queueing models with multiple change-points
- Author
-
Huang, Chin-Yu and Hung, Tsui-Ying
- Subjects
- *
RELIABILITY in engineering , *QUEUING theory , *DEBUGGING , *COMPUTER software development , *COMPUTER software testing - Abstract
Abstract: Over the past three decades, many software reliability growth models (SRGMs) have been proposed, and they can be used to predict and estimate software reliability. One common assumption of these conventional SRGMs is that detected faults are removed immediately. In reality, this assumption may not hold. During debugging, developers need time to reproduce the failure, identify the root causes of faults, fix them, and then re-run the software. Experiments and observations suggest that the fault correction rate may not be constant and may change at certain points as time proceeds. Consequently, in this paper, we investigate how to apply queueing models to describe the fault detection and correction processes during software development. We propose an extended infinite server queueing model with multiple change-points to predict and assess software reliability. Experimental results based on real failure data show that the proposed model can depict the change of fault correction rates and predict the behavior of software development more accurately than traditional SRGMs. [Copyright Elsevier]
- Published
- 2010
- Full Text
- View/download PDF
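A simplified simulation of the idea in the entry above is sketched below: detected faults enter an infinite-server correction queue, and the mean correction time changes at given change points. Detection here is a plain homogeneous Poisson process and every parameter is an assumed example value; the paper's model is analytical rather than simulation-based, so this is only an illustration of the mechanism.
```python
# Infinite-server fault correction with piecewise mean correction times.
import numpy as np

rng = np.random.default_rng(0)

def simulate(t_end=200.0, det_rate=0.5, change_points=(60.0, 140.0),
             mean_correction=(8.0, 4.0, 2.0)):
    # Fault detection epochs (homogeneous Poisson for simplicity).
    n = rng.poisson(det_rate * t_end)
    detect = np.sort(rng.uniform(0.0, t_end, n))
    # Pick the correction-time regime in force at each detection epoch.
    regime = np.searchsorted(change_points, detect)
    service = rng.exponential(np.take(mean_correction, regime))
    finish = detect + service            # infinite servers: corrections never wait
    return detect, finish

detect, finish = simulate()
for t in np.linspace(0.0, 200.0, 5):
    print(f"t={t:6.1f}  detected={np.sum(detect <= t):3d}  corrected={np.sum(finish <= t):3d}")
```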
42. Staffing Level and Cost Analyses for Software Debugging Activities Through Rate-Based Simulation Approaches.
- Author
-
Chu-Ti Lin and Chin-Yu Huang
- Subjects
- *
SIMULATION methods & models , *COST analysis , *POISSON processes , *COMPUTER software testing , *DEBUGGING - Abstract
Research in the field of software reliability, dedicated to the analysis of software failure processes, is quite diverse. In recent years, several attractive rate-based simulation approaches have been proposed. Thus far, it appears that most existing simulation approaches do not take into account the number of available debuggers (or developers). In practice, the number of debuggers will be carefully controlled. If all debuggers are busy, they may not address newly detected faults for some time. Furthermore, practical experience shows that fault-removal time is not negligible, and the number of removed faults generally lags behind the total number of detected faults, because fault detection activities continue as faults are being removed. Given these facts, we apply queueing theory to describe and explain possible debugging behavior during software development. Two simulation procedures are developed based on G/G/∞ and G/G/m queueing models, respectively. The proposed methods are illustrated using real software failure data. The analysis conducted through the proposed framework can help project managers assess the appropriate staffing level for the debugging team from the standpoint of performance and cost-effectiveness. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
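The staffing question discussed in the entry above can be illustrated with a minimal G/G/m-style simulation: faults are detected at random times, m debuggers correct them first-come first-served, and the backlog depends on m. The arrival and correction-time distributions and all numbers below are illustrative assumptions, not the paper's calibrated inputs or its actual procedures.
```python
# Minimal m-debugger (G/G/m, FCFS) simulation of fault correction backlog.
import heapq
import numpy as np

rng = np.random.default_rng(1)

def debug_backlog(num_faults=200, mean_interarrival=1.0, mean_fix=2.5, m=3):
    arrivals = np.cumsum(rng.exponential(mean_interarrival, num_faults))
    fixes = rng.exponential(mean_fix, num_faults)
    free_at = [0.0] * m                 # heap of times at which each debugger becomes free
    heapq.heapify(free_at)
    total_wait, last_done = 0.0, 0.0
    for t, s in zip(arrivals, fixes):
        earliest = heapq.heappop(free_at)
        start = max(t, earliest)        # fault waits if all m debuggers are busy
        done = start + s
        heapq.heappush(free_at, done)
        total_wait += start - t
        last_done = max(last_done, done)
    return total_wait / num_faults, last_done

for m in (1, 2, 3, 5):
    wait, makespan = debug_backlog(m=m)
    print(f"m={m}: mean wait before correction {wait:6.2f}, all faults fixed by {makespan:7.1f}")
```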
43. Flexible software reliability growth model with testing effort dependent learning process
- Author
-
Kapur, P.K., Goswami, D.N., Bardhan, Amit, and Singh, Ompal
- Subjects
- *
MATHEMATICAL models of economic development , *MATHEMATICAL models , *LEARNING , *ESTIMATION theory - Abstract
Abstract: Considerable importance is attached to the testing phase of the Software Development Life Cycle (SDLC). It is during this phase that the software product is checked against user requirements, and any discrepancies identified are removed. But testing needs to be monitored to increase its effectiveness. Software Reliability Growth Models (SRGMs) that specify mathematical relationships between the failure phenomenon and time have proved useful. SRGMs that include factors affecting the failure process are more realistic and useful. Software fault detection and removal during the testing phase of the SDLC depend on how testing resources (test cases, manpower and time) are used and also on previously identified faults. With this motivation, a Non-Homogeneous Poisson Process (NHPP) based SRGM that is flexible enough to describe various software failure/reliability curves is proposed in this paper. Both testing effort and a time-dependent fault detection rate (FDR) are considered for software reliability modeling. The time lag between fault identification and removal has also been depicted. The applicability of our model is shown by validating it on software failure data sets obtained from different real software development projects. Comparisons with established models in terms of goodness of fit, the Akaike Information Criterion (AIC), Mean of Squared Errors (MSE), etc. are presented. [Copyright Elsevier]
- Published
- 2008
- Full Text
- View/download PDF
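The model comparisons mentioned in the entry above rest on goodness-of-fit measures such as MSE and AIC. The sketch below fits a flexible inflection S-shaped NHPP mean value function to a toy cumulative-failure data set by nonlinear least squares and reports MSE and a least-squares AIC. Both the functional form and the data are assumptions made for illustration; the paper's model additionally ties the detection rate to testing effort, which is not reproduced here.
```python
# Generic SRGM goodness-of-fit sketch: least-squares fit, then MSE and AIC.
import numpy as np
from scipy.optimize import curve_fit

def inflection_s(t, a, b, beta):
    return a * (1.0 - np.exp(-b * t)) / (1.0 + beta * np.exp(-b * t))

weeks = np.arange(1, 21, dtype=float)
cum_faults = np.array([  5, 10, 17, 26, 37, 49, 61, 72, 82, 90,
                        97, 103, 108, 112, 115, 118, 120, 121, 122, 123], dtype=float)

params, _ = curve_fit(inflection_s, weeks, cum_faults, p0=(130.0, 0.2, 5.0), maxfev=10000)
fitted = inflection_s(weeks, *params)

n, k = len(weeks), len(params)
mse = np.mean((cum_faults - fitted) ** 2)
aic_ls = n * np.log(np.sum((cum_faults - fitted) ** 2) / n) + 2 * k  # least-squares form of AIC
print("a, b, beta =", np.round(params, 3))
print(f"MSE = {mse:.3f}, least-squares AIC = {aic_ls:.2f}")
```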
44. Software Reliability Analysis and Measurement Using Finite and Infinite Server Queueing Models.
- Author
-
Chin-Yu Huang and Wei-Chih Huang
- Subjects
- *
SOFTWARE engineering , *RELIABILITY in engineering , *QUEUING theory , *COMPUTER software , *SOFTWARE measurement , *FAULT tolerance (Engineering) - Abstract
Software reliability is often defined as the probability of failure-free software operation for a specified period of time in a specified environment. During the past 30 years, many software reliability growth models (SRGM) have been proposed for estimating the reliability growth of software. In practice, effective debugging is not easy because the fault may not be immediately obvious. Software engineers need time to read and analyze the collected failure data. The time delayed by the fault detection and correction processes should not be negligible. Experience shows that the software debugging process can be described and modeled using a queueing system. In this paper, we use both finite and infinite server queueing models to predict software reliability. We also investigate the problem of imperfect debugging, where fixing one bug creates another. Numerical examples based on two sets of real failure data are presented and discussed in detail. Experimental results show that the proposed framework, which incorporates both fault detection and correction processes into SRGM, has fairly accurate prediction capability. [ABSTRACT FROM AUTHOR]
- Published
- 2008
- Full Text
- View/download PDF
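The imperfect-debugging idea mentioned in the entry above, where fixing one bug occasionally creates another, can be illustrated with the small simulation below: each correction introduces a brand-new fault with probability p, which must then be detected and corrected in turn (corrections never queue, as with infinite servers). All distributions and numbers are illustrative assumptions rather than anything taken from the paper.
```python
# Imperfect-debugging sketch: bad fixes spawn new faults with probability p_new.
import numpy as np

rng = np.random.default_rng(2)

def imperfect_debugging(initial_faults=100, mean_detect=5.0, mean_fix=2.0, p_new=0.2):
    corrections, introduced = 0, 0
    pending = list(rng.exponential(mean_detect, initial_faults))  # detection times of pending faults
    while pending:
        t_detect = pending.pop()
        t_done = t_detect + rng.exponential(mean_fix)             # correction completes here
        corrections += 1
        if rng.random() < p_new:                                  # bad fix introduces a new fault
            introduced += 1
            pending.append(t_done + rng.exponential(mean_detect))
    return corrections, introduced

corr, intro = imperfect_debugging()
print(f"total corrections: {corr}, faults introduced by bad fixes: {intro}")
```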
45. Flexible Software Reliability Growth Models for Distributed Systems.
- Author
-
Kapur, P., Gupta, Amit, Kumar, Archana, and Yamada, Shigeru
- Abstract
With the increasing demands on resources and skills needed to complete complex software projects, there is a steady move towards distributed working. Distributed software development is being made feasible owing to rapid advances in communication technologies. Distributed systems often involve development teams that are located across company sites, organizations, sectors and nations; as such there are special risks involved that are over and above the normal risks of software development. A distributed development project with some or all of the software components generated by different teams presents complex issues of quality and reliability of the software. The need is growing to estimate, risk assess, plan and manage the development of these distributed components and the final full system release. In this paper, an attempt has been made to compare Non-Homogeneous Poisson Process (NHPP) based models in a distributed development environment. The proposed NHPP model assumes that the software system consists of a finite number of reused and newly developed sub-systems. The reused sub-systems do not consider the effect of fault severity on the software reliability growth phenomenon because they stabilize over a period of time, i.e., their growth is uniform, whereas the newly developed sub-systems do. The fault removal phenomenon for reused and newly developed sub-systems is modeled separately and summed to obtain the total fault removal phenomenon of the software system. The performance of SRGMs is judged by their ability to fit past software fault data (goodness of fit) and to predict satisfactorily the future behavior of the software fault removal process (predictive validity). [ABSTRACT FROM AUTHOR]
- Published
- 2005
- Full Text
- View/download PDF
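The additive structure described in the entry above, where the total fault removal of the system is the sum of per-sub-system contributions, is sketched below with exponential (Goel-Okumoto type) growth for reused sub-systems and delayed S-shaped growth for newly developed ones. These are common textbook choices used here as assumptions and are not necessarily the paper's exact equations; the parameter values are invented.
```python
# Additive mean value function for a system of reused and new sub-systems.
import numpy as np

def reused_component(t, a, b):
    return a * (1.0 - np.exp(-b * t))                     # exponential growth (stable, uniform)

def new_component(t, a, b):
    return a * (1.0 - (1.0 + b * t) * np.exp(-b * t))     # delayed S-shaped growth (learning)

def total_mvf(t, reused_params, new_params):
    return (sum(reused_component(t, a, b) for a, b in reused_params)
            + sum(new_component(t, a, b) for a, b in new_params))

reused = [(40.0, 0.10), (25.0, 0.08)]   # (fault content, detection rate) per reused sub-system
newdev = [(60.0, 0.05)]                 # per newly developed sub-system
for t in (10, 40, 100):
    print(t, round(total_mvf(float(t), reused, newdev), 2))
```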
46. Simple Plots Improve Software Reliability Prediction Models.
- Author
-
Lawson, JohnS., Wesselman, CraigW., and Scott, DelT.
- Subjects
SOFTWARE engineering ,POISSON processes ,RELIABILITY in engineering - Abstract
Software reliability prediction is accomplished by fitting a nonhomogeneous Poisson process (NHPP) model to data from software testing. The data consist of the cumulative time and the cumulative number of failures found in software testing. The NHPP model can be used to predict the reliability of the software product at the time of release or to determine how much further testing must be done to reach a specified failure rate. Models are normally fitted to software testing data using Poisson regression by the method of maximum likelihood. We encountered difficulties fitting models when numerical algorithms failed to converge or when we were unable to discriminate among several models with the same number of parameters. These difficulties were the result of having no likelihood ratio test to compare models with the same number of parameters and anomalies in the data that caused numerical algorithms to fail. We found that a simple cumulative plot of the data (cumulative failures on the vertical axis vs. cumulative test time on the horizontal axis) helped in spotting anomalies in the data and selecting an appropriate model to fit. A second plot of running products of ratios of the probability densities for the predictions made from competing models, called the prequential likelihood ratio, helped in discriminating between models. Use of these plots helped resolve the difficulties we experienced in fitting models to the software testing data. [ABSTRACT FROM AUTHOR]
- Published
- 2003
- Full Text
- View/download PDF
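The prequential likelihood ratio plot described in the entry above tracks the running product of ratios of one-step-ahead predictive densities from two competing models. The sketch below is a simplified version: two NHPP models are fitted once to an initial window of grouped failure counts and the running log-ratio of their Poisson predictive probabilities is accumulated over the remaining intervals. A full prequential analysis re-fits both models at every step; that simplification, the choice of the two models, and the data are all assumptions.
```python
# Simplified running (log) prequential likelihood ratio between two NHPP models.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import poisson

def goel_okumoto(t, a, b):
    return a * (1.0 - np.exp(-b * t))

def delayed_s(t, a, b):
    return a * (1.0 - (1.0 + b * t) * np.exp(-b * t))

t = np.arange(1, 21, dtype=float)                              # end of each test week
counts = np.array([3, 6, 8, 9, 10, 9, 8, 8, 7, 6, 5, 5, 4, 3, 3, 2, 2, 1, 1, 1])
cum = np.cumsum(counts).astype(float)

fit_window = 10                                                # models fitted once on this prefix
p1, _ = curve_fit(goel_okumoto, t[:fit_window], cum[:fit_window], p0=(150, 0.05), maxfev=10000)
p2, _ = curve_fit(delayed_s,   t[:fit_window], cum[:fit_window], p0=(120, 0.30), maxfev=10000)

log_plr = 0.0
for i in range(fit_window, len(t)):
    mu1 = goel_okumoto(t[i], *p1) - goel_okumoto(t[i - 1], *p1)  # predicted mean count, model 1
    mu2 = delayed_s(t[i], *p2) - delayed_s(t[i - 1], *p2)        # predicted mean count, model 2
    log_plr += poisson.logpmf(counts[i], mu1) - poisson.logpmf(counts[i], mu2)
    print(f"week {int(t[i]):2d}: running log-PLR (GO vs delayed-S) = {log_plr:7.3f}")
```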
47. Calibrating software reliability models when the test environment does not match the user environment.
- Author
-
Xuemei Zhang, Jeske, Daniel R., and Hoang Pham
- Subjects
CALIBRATION ,COMPUTER software ,RELIABILITY in engineering ,QUALITY control ,STOCHASTIC processes ,MATHEMATICAL models - Abstract
Software failures have become the major factor that brings the system down or causes a degradation in the quality of service. For many applications, estimating the software failure rate from a user's perspective helps the development team evaluate the reliability of the software and determine the release time properly. Traditionally, software reliability growth models are applied to system test data with the hope of estimating the software failure rate in the field. Given the aggressive nature by which the software is exercised during system test, as well as unavoidable differences between the test environment and the field environment, the resulting estimate of the failure rate will not typically reflect the user-perceived failure rate in the field. The goal of this work is to quantify the mismatch between the system test environment and the field environment. A calibration factor is proposed to map the failure rate estimated from the system test data to the failure rate that will be observed in the field. Non-homogeneous Poisson process models are utilized to estimate the software failure rate in both the system test phase and the field. For projects that have only system test data, use of the calibration factor provides an estimate of the field failure rate that would otherwise be unavailable. For projects that have both system test data and previous field data, the calibration factor can be explicitly evaluated and used to estimate the field failure rate of future releases as their system test data becomes available. Copyright © 2002 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2002
- Full Text
- View/download PDF
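A much simplified version of the calibration idea in the entry above is sketched below: the failure intensity at release is estimated from system-test data with a Goel-Okumoto NHPP fit, the observed field failure rate of a past release is computed from field exposure, and their ratio is used as a calibration factor for the next release. All data are invented example numbers, and the paper's actual NHPP-based field model and estimator are more elaborate than this.
```python
# Test-to-field calibration sketch: K = (field failure rate) / (test intensity at release).
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    return a * (1.0 - np.exp(-b * t))

def test_intensity_at_release(test_weeks, cum_faults):
    (a, b), _ = curve_fit(goel_okumoto, test_weeks, cum_faults, p0=(100, 0.1), maxfev=10000)
    t_rel = test_weeks[-1]
    return a * b * np.exp(-b * t_rel)          # dm/dt at the release time

# Release N: both system-test data and field data are available.
weeks_n = np.arange(1, 13, dtype=float)
cum_n = np.array([8, 15, 22, 28, 33, 37, 40, 43, 45, 47, 48, 49], float)
lam_test_n = test_intensity_at_release(weeks_n, cum_n)
lam_field_n = 6 / 52.0                         # 6 field failures over 52 weeks of field exposure

calibration = lam_field_n / lam_test_n

# Release N+1: only system-test data available so far.
weeks_next = np.arange(1, 11, dtype=float)
cum_next = np.array([10, 19, 26, 32, 37, 41, 44, 46, 48, 49], float)
lam_test_next = test_intensity_at_release(weeks_next, cum_next)
print(f"calibration factor K = {calibration:.3f}")
print(f"predicted field failure rate for next release = {calibration * lam_test_next:.4f} per week")
```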
48. A Logistic Fault-Dependent Detection Software Reliability Model
- Author
-
Pham, Hoang
- Subjects
predictive power ,normalized-rank Euclidean distance ,predictive-ratio risk ,non-homogeneous Poisson process (NHPP) ,software reliability growth model ,logistic fault-dependent detection - Abstract
In this paper, we present a logistic fault-dependent detection model in which the detection rate, which depends on the faults already detected, grows rapidly at the beginning of testing but slows as testing progresses until the maximum number of faults in the software is reached. The explicit function for the expected number of software failures detected by time t, called the mean value function, of the proposed model is derived. Model analysis is discussed based on the normalized-rank Euclidean distance (RED) and other criteria to illustrate the goodness of fit of the proposed model and compare it with several existing NHPP models using a set of software failure data. The confidence intervals for the parameter estimates of the proposed model are also presented. A numerical analysis based on a real data set of earthquakes of magnitude 7 or higher in the United States, illustrating the goodness of fit of the proposed model and a recent logistic growth model, is also discussed. The results show that the proposed model fits significantly better than the existing software reliability growth models considered.
- Published
- 2018
- Full Text
- View/download PDF
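One illustrative reading of the fault-dependent detection idea in the entry above is a detection rate proportional both to the faults already detected and to the faults still remaining, which yields logistic-style growth that is fast early on and slows as the fault content is approached. The ODE below and its parameters follow that reading only; they are not the paper's mean value function.
```python
# Logistic-style fault-dependent detection: dm/dt = b * m * (a - m).
import numpy as np
from scipy.integrate import solve_ivp

a, b, m0 = 150.0, 0.002, 3.0          # assumed fault content, dependence rate, initially detected faults

def dm_dt(t, m):
    return b * m * (a - m)            # rate depends on detected and remaining faults

sol = solve_ivp(dm_dt, (0.0, 40.0), [m0], t_eval=np.linspace(0.0, 40.0, 9),
                rtol=1e-8, atol=1e-8)

# Closed-form logistic solution of the same ODE, for comparison.
beta = (a - m0) / m0
logistic = a / (1.0 + beta * np.exp(-a * b * sol.t))

for t, m_num, m_cl in zip(sol.t, sol.y[0], logistic):
    print(f"t={t:5.1f}  numeric={m_num:7.2f}  closed-form={m_cl:7.2f}")
```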
49. Change-point estimation for repairable systems combining bootstrap control charts and clustering analysis: Performance analysis and a case study.
- Author
-
Yang, Z. J., Du, X. J., Chen, F., Chen, C. H., Tian, H. L., and He, J. L.
- Subjects
- *
CHANGE-point problems , *STATISTICAL bootstrapping , *INDUSTRIAL management , *POISSON processes , *NUMERICAL control of machine tools - Abstract
Complex repairable systems with bathtub-shaped failure intensity normally go through three periods in the lifecycle, each requiring corresponding maintenance policies and management decisions. Accurate estimation of the change points between these periods is therefore of great significance. This paper addresses the challenge of change-point estimation in failure processes for repairable systems, especially for sustained and gradual processes of change. The paper proposes a sectional model composed of two non-homogeneous Poisson processes (NHPPs) to describe the bathtub-shaped failure intensity. To obtain an accurate change-point estimator, a novel hybrid method is developed that combines bootstrap control charts with a sequential clustering approach. Through Monte Carlo simulations, the proposed change-point estimation method is compared with two powerful estimation procedures under various conditions. The results suggest that the proposed method performs effectively and satisfactorily for failure processes regardless of the underlying distribution, the range of change, and the sampling scheme. In particular, it provides higher precision and lower uncertainty in detecting small shifts. Finally, a case study analysing real failure data from a heavy-duty CNC machine tool is presented. The parameters of the proposed NHPP model are estimated, and the change point between the early failure period and the random failure period is calculated. These findings can contribute to determining the burn-in time needed to improve the reliability of the machine tool. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
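A simplified, likelihood-based sketch of the sectional two-NHPP idea in the entry above is given below: a single change point tau splits the failure record into an early-failure segment and a later segment, each modelled as a time-truncated power-law NHPP with closed-form MLEs, and tau is chosen by a grid search over candidate values. The paper instead combines bootstrap control charts with sequential clustering, which is not reproduced; the failure times below are invented example data.
```python
# Grid-search change-point estimate for a sectional two-segment power-law NHPP.
import numpy as np

def powerlaw_loglik(times, t_end):
    """Profile log-likelihood of a time-truncated power-law NHPP segment."""
    n = len(times)
    if n < 2:
        return -np.inf
    beta = n / np.sum(np.log(t_end / times))
    lam = n / t_end**beta
    return n * np.log(lam) + n * np.log(beta) + (beta - 1.0) * np.sum(np.log(times)) - n

failure_times = np.array([   3.,    8.,   15.,   24.,   36.,   52.,   74.,  103.,  160.,  230.,
                           310.,  395.,  480.,  570.,  655.,  745.,  830.,  920., 1010., 1100.])
T = 1150.0                                               # observation (truncation) time in hours

best_tau, best_ll = None, -np.inf
for tau in np.linspace(100.0, 1000.0, 181):              # candidate change points
    early = failure_times[failure_times <= tau]
    late = failure_times[failure_times > tau] - tau       # restart the clock after tau
    ll = powerlaw_loglik(early, tau) + powerlaw_loglik(late, T - tau)
    if ll > best_ll:
        best_tau, best_ll = tau, ll

print(f"estimated change point: {best_tau:.0f} h (profile log-likelihood {best_ll:.2f})")
```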
50. Loglog fault-detection rate and testing coverage software reliability models subject to random environments
- Author
-
Pham, Hoang
- Published
- 2014
- Full Text
- View/download PDF