Information Systems
Risk Management Examples with Simulation
Chapter 5: Information Systems Security Risk
In Book 1: Novel Six Sigma Approaches to Risk Assessment and Management
ISBN: 9781522527039
Author: Vojo Bubevski (Independent Researcher)
Abstract
The proposed method is applied in Software Engineering for security software quality management. The DMAIC framework applies proven stochastic techniques as follows: 1. Define: A hypothetical software project is considered, with a specified delivery target date and quality goal. The testing project is analysed while still incomplete, with weeks of testing remaining; 2. Measure: Simulation considers the testing defects and predicts the number of defects at the end of testing; 3. Analyse: If the simulation confirms that the quality goal will be met, testing continues as is. Simulation regularly re-checks the quality goals as testing progresses; 4. Improve: If the predicted quality misses the target, the simulation predicts when the target will be achieved. There are two options: either more resources are allocated to the project to rectify the problem, or the project is delayed. An improvement project is defined to rectify the problem; 5. Control: Control is demonstrated using a very similar scenario with Quality Control data, which applies slightly different models.
Keywords: Information systems; Security software; Six Sigma; DMAIC; Monte Carlo simulation
The Results
The experimental results, i.e., the predictions, are compared with the available actual data for verification. It should be underlined that no data are available from the System’s Operation phase, so it is impossible to verify the predictions made for the Improve and Control steps. Three comparisons are performed, as presented below: a) Comparison by Week; b) Partial Data Comparison; and c) Overall Data Comparison.
Comparison by Week: The results are verified by comparing the predicted total number of defects per Defect-Type by week, including the Total, versus the corresponding actual defects for the period TI(13) – TI(15). This comparison is presented in Table 7.
Table 7: Results Comparison by Week
| Defect-Type | Week 13 | | Week 14 | | Week 15 | |
| | (Actual) Predicted | Error % | (Actual) Predicted | Error % | (Actual) Predicted | Error % |
| SF | (6) 7 | 16.67 | (7) 6 | -14.29 | (4) 4 | 0 |
| SL | (14) 11 | -21.43 | (7) 9 | 28.57 | (15) 7 | -53.33 |
| Other | (8) 8 | 0 | (3) 7 | 133.33 | (1) 6 | 500 |
| Total | (28) 26 | -7.14 | (17) 22 | 29.41 | (20) 17 | -15 |
The SF defects and the Total are reasonably predicted. The SL defects are underestimated for two weeks and overestimated for one week with moderate errors. The Other defects are precisely predicted for one week and badly overestimated for two weeks. Overall, these prediction results are tolerable.
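The Error % figures in Tables 7 – 9 are consistent with the usual signed relative-error convention. A minimal sketch of that calculation (Python; an illustration, not part of the original analysis):

```python
def percent_error(actual, predicted):
    """Signed percentage error of a prediction against the actual value."""
    return (predicted - actual) / actual * 100.0

# Week 14 SL row of Table 7: 7 actual defects, 9 predicted
print(round(percent_error(7, 9), 2))   # 28.57
```

A positive value indicates overestimation and a negative value underestimation, matching the signs in the tables.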
Partial Data Comparison: The results are verified by comparing the predicted total number of defects by Defect-Type, including the Total, for the three weeks TI(13) – TI(15), versus the corresponding actual defects. The software reliability measure Mean Time To Failure (MTTF) is also compared (Table 8).
Table 8. Partial Data Comparison
| Process | Defects | | | MTTF (Weeks) | | |
| | Actual | Pred. | Error % | Actual | Pred. | Error % |
| SF | 17 | 17 | 0 | 0.8824 | 0.8824 | 0 |
| SL | 36 | 27 | -25 | 0.4167 | 0.5556 | 33.3333 |
| Other | 12 | 21 | 75 | 1.2500 | 0.7143 | -42.8571 |
| Total | 65 | 65 | 0 | 0.2308 | 0.2308 | 0 |
Here, the SF defects and the Total are exactly predicted. The SL defects are underestimated with a moderate error. The Other defects are substantially overestimated. Overall, these prediction results are acceptable.
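The tabulated MTTF values are consistent with the standard definition of MTTF as the observation window divided by the number of failures; with the 15-week window this reproduces every entry in Tables 8 and 9. A sketch, assuming that convention:

```python
def mttf(observation_weeks, defect_count):
    """Mean Time To Failure: observation period divided by failure count."""
    return observation_weeks / defect_count

# SF row of Table 8: 17 defects over the 15-week observation window
print(round(mttf(15, 17), 4))   # 0.8824
```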
Overall Data Comparison: The results are verified by comparing the predicted total number of defects by Defect-Type, including the Total, for the entire period TI(1) – TI(15), versus the corresponding actual defects. The MTTF is also compared. The period of observation is 15 weeks. This comparison is presented in Table 9.
Table 9. Overall Data Comparison
| Process | Defects | | | MTTF (Weeks) | | |
| | Actual | Pred. | Error % | Actual | Pred. | Error % |
| SF | 907 | 907 | 0 | 0.0165 | 0.0165 | 0 |
| SL | 712 | 703 | -1.2640 | 0.0211 | 0.0213 | 1.2802 |
| Other | 251 | 260 | 3.5857 | 0.0598 | 0.0577 | -3.4615 |
| Total | 1870 | 1870 | 0 | 0.0080 | 0.0080 | 0 |
Again, the SF defects and the Total are accurately predicted. The SL defects are underestimated with minimal error. The Other defects are slightly overestimated. Thus, these prediction results are very good.
Considering the calculated errors in Table 7 – Table 9, the experimental results are satisfactorily verified. It should be emphasised that the DMAIC-Simulation analysis is more reliable compared to the conventional models. This is because the variability and uncertainty in the software quality process are catered for by applying probability tools. This substantially increases the confidence in the DMAIC-simulation decision support, which is very important for the project.
Chapter 2: Analysis of Software Quality Risk
In Book 9: Miscellaneous Risk Analysis
ISBN: 978-620-3-20060-7
Author: Vojo Bubevski (Independent Researcher)
Abstract:
This chapter demonstrates an analysis of software quality risk that applies a stochastic DMAIC method. This new practical method uses Six Sigma and Monte Carlo Simulation for ongoing quality risk management. DMAIC is systematically applied as a tactical framework to enhance the process and improve quality. The simulation predicts quality (reliability) at the expected process end and identifies and quantifies risk. DMAIC is a verified structured methodology for systematic process and quality improvements. Monte Carlo Simulation is superior to conventional risk models. These synergetic enhancements eliminate observed deficiencies. The method has been successfully proven and applied in practice on real in-house projects, achieving substantial savings, quality, and customer satisfaction. An application on an internal project and the results obtained are presented. The method is elaborated in simple terms on a published third-party project, answering key research questions from a practical perspective. This CMMI® compliant method offers important benefits, including savings, quality, and customer satisfaction.
Keywords: Risk Analysis, Software Quality, Software Reliability, Six Sigma, DMAIC, Sensitivity Analysis, What-If Analysis, Monte Carlo simulation, Stochastic model.
The Results
To analyse the process, the software reliability is predicted for the future period of three months, i.e., from TI(13) to TI(15). The simulation model is a discrete simulation of Musa’s Basic Execution Time Model. Generally, this model is used to predict the future course of the FIF (Failure Intensity Function).
The Poisson distribution is used in the model to account for the variability and uncertainty of reliability. The FIF by Defect-Type are simulated for the three months TI(13) – TI(15): the mean of the Poisson distribution for each defect type in TI(13), TI(14), and TI(15) is set equal to the approximated FIF value for the respective month (1 – 3). The Total for TI(13), TI(14), and TI(15) is simply the sum of the defects of all three types for the respective month (4).
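The Poisson step above can be sketched as a plain Monte Carlo loop. The code below is a stdlib Python illustration, not the @RISK model; the per-type means (4, 7, 6) are the approximated FIF values for TI(15) reported in Table 4:

```python
import math
import random

def poisson_sample(mean, rng):
    """One Poisson variate via Knuth's algorithm (adequate for small means)."""
    limit = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_total_fif(means, trials=10_000, seed=1):
    """Simulate the Total FIF as the sum of independent Poisson draws,
    one per defect type, and return its mean over all trials."""
    rng = random.Random(seed)
    totals = [sum(poisson_sample(m, rng) for m in means) for _ in range(trials)]
    return sum(totals) / trials

# Approximated FIF means for TI(15): Function, A/I/C/A, Misc.
print(round(simulate_total_fif([4, 7, 6]), 1))   # close to 17
```

Because the sum of independent Poisson variates is itself Poisson, the simulated Total converges to the sum of the per-type means, matching the Total row of Table 4.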
To define the quality target according to the Quality Goal statement, the FIF will be used. The target is to achieve a rate of change of the CFF in Month 15 of one defect per Defect-Type and three defects in total, i.e., the FIF should be equal to one for all defect types and three defects for the Total in the final month of testing.
To define the quality targets in the model, the Six Sigma Target Value, Lower Specification Limit (LSL), and Upper Specification Limit (USL) will be used: a) the Target Value is one for all defect types and three defects for the Total; b) the USL is two for all defect types and six for the Total; c) the LSL should be zero, but it is set to a very small negative number (-0.0001 for all defect types including the Total) to prevent an error in the Six Sigma metrics calculations.
The Six Sigma metrics in the model are used to measure the performance of the process. In this model the following Six Sigma metrics are used: a) Process Capability (Cp); b) Sigma Level; and c) Probability of Non-Compliance (PNC).
To apply the Poisson distribution, the @RISK® Poisson distribution function was used. To calculate Standard Deviation, Minimum Value, and Maximum Value, the corresponding @RISK® functions for Standard Deviation, Minimum Value, and Maximum Value were used. To calculate the Six Sigma Cp, PNC, and Sigma Level metrics, the @RISK® Six Sigma Cp, PNC, and Sigma Level functions were respectively used. For the Six Sigma Target Value, USL, and LSL, constants were specified. It should be noted that the Six Sigma Cp, PNC, and Sigma Level metrics are calculated from the resulting probability distribution from the simulation.
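The @RISK Six Sigma functions are proprietary, but the metrics they compute follow standard definitions and can be derived directly from the simulated samples. A generic sketch in Python (the sigma-level formula here is the plain nearest-limit distance in standard deviations, without the optional 1.5σ long-term shift some tools apply):

```python
import statistics

def six_sigma_metrics(samples, lsl, usl):
    """Cp, Probability of Non-Compliance (PNC), and Sigma Level
    computed from a simulated output distribution."""
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)
    cp = (usl - lsl) / (6 * sigma)                        # process capability
    pnc = sum(x < lsl or x > usl for x in samples) / len(samples)
    sigma_level = min(usl - mu, mu - lsl) / sigma         # nearest-limit distance
    return cp, pnc, sigma_level

# Toy data: mean 2, population std dev 1, limits 0 and 6
cp, pnc, level = six_sigma_metrics([1, 1, 3, 3], 0, 6)
print(cp, pnc, level)   # 1.0 0.0 2.0
```

In the chapter's model, the samples would be the Poisson-simulated defect counts, with Target Value, LSL, and USL set as specified above.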
The project analysis results are displayed in Table 4.
Table 4: Predicted FIF for Month 15
| Process | µ | σ | Min Value | Max Value |
| Function | 4 | 2.06 | 0 | 14 |
| A/I/C/A | 7 | 2.62 | 0 | 19 |
| Misc. | 6 | 2.47 | 0 | 19 |
| Total | 17 | 4.19 | 5 | 36 |
The predicted Mean (µ) of the total number of defects by Defect-Type in the final month of testing TI(15) including the Total is given. Also, the associated Standard Deviation (σ) and Minimum and Maximum Values obtained from the simulation are shown.
The predicted Total in TI(15) is 17 defects, with a Standard Deviation of 4.19. As this is far above the quality target of three defects in total, the product will not be stable for delivery at the end of Month 15.
Chapter 3: Security Software Quality Risk Analysis
In Book 9: Miscellaneous Risk Analysis
ISBN: 978-620-3-20060-7
Author: Vojo Bubevski (Independent Researcher)
Abstract:
The conventional approach to security software quality management, specifically for ongoing projects, has two major limitations: (1) Six Sigma is not applied, and (2) analytic risk models are used. This chapter proposes a stochastic method, which applies Six Sigma Define, Measure, Analyse, Improve, and Control (DMAIC), Monte Carlo Simulation, and Orthogonal Security Defect Classification (OSDC). DMAIC is tactically applied to assess and improve quality. The simulation predicts quality (reliability) and identifies and quantifies the quality risk. OSDC enables qualitative analysis. DMAIC is a verified structured methodology for systematic process and quality improvements. Simulation is superior to analytic risk models. OSDC offers qualitative improvements. This synergetic method eliminates observed deficiencies, gaining important benefits including savings, quality, and customer satisfaction. It is CMMI® (Capability Maturity Model Integration) compliant. The method is elaborated in simple terms on a published third-party project.
Keywords: Risk Analysis, Security Software, Quality Management, Six Sigma, DMAIC, Monte Carlo simulation, Stochastic model.
The Results
The experimental results, i.e., the predictions, are compared with the available actual data for verification. As no data are available from the System’s Operation phase, it is impossible to verify the predictions made for the Improve and Control steps. Two comparisons are performed, as presented below: Partial Data Comparison and Overall Data Comparison.
The results are verified by comparing the predicted total number of defects by Defect-Type, including the Total, for the period TI(13) – TI(15), versus the corresponding actual defects. The predicted software reliability MTTF is also compared with the actual reliability. The period of observation is three months. This comparison is presented in Table 11.
Table 11: Partial Data Comparison
| Process | Defects | | | MTTF (Months) | | |
| | Actual | Pred. | Error % | Actual | Pred. | Error % |
| SF | 17 | 17 | 0 | 0.8824 | 0.8824 | 0 |
| SL | 36 | 27 | -25 | 0.4167 | 0.5556 | 33.3333 |
| Misc. | 12 | 21 | 75 | 1.2500 | 0.7143 | -42.8571 |
| Total | 65 | 65 | 0 | 0.2308 | 0.2308 | 0 |
The SF defects and the Total are accurately predicted. The SL defects are underestimated with an acceptable error. The Misc. defects are significantly overestimated. Overall, these prediction results are acceptable.
The results are verified by comparing the predicted total number of defects by Defect-Type, including the Total, for the entire period TI(1) – TI(15), versus the corresponding actual defects. The predicted software reliability MTTF is also compared with the actual reliability. The period of observation is 15 months. This comparison is presented in Table 12.
Table 12: Overall Data Comparison
| Process | Defects | | | MTTF (Months) | | |
| | Actual | Pred. | Error % | Actual | Pred. | Error % |
| SF | 907 | 907 | 0 | 0.0165 | 0.0165 | 0 |
| SL | 712 | 703 | -1.2640 | 0.0211 | 0.0213 | 1.2802 |
| Misc. | 251 | 260 | 3.5857 | 0.0598 | 0.0577 | -3.4615 |
| Total | 1870 | 1870 | 0 | 0.0080 | 0.0080 | 0 |
Again, the SF defects and the Total are accurately predicted. The SL defects are underestimated with minimal error. The Misc. defects are slightly overestimated. Thus, these prediction results are very good.
Considering the calculated errors in Table 11 and Table 12, the experimental results are satisfactorily verified. The DMAIC-Simulation analysis is more reliable compared to the conventional models. This is because the variability and uncertainty in the software quality process are catered for by applying probability tools. This substantially increases the confidence in the DMAIC-simulation decision support, which is very important for the project.
Chapter 4: Comparative Software Reliability Risk Analysis
In Book 9: Miscellaneous Risk Analysis
ISBN: 978-620-3-20060-7
Author: Vojo Bubevski (Independent Researcher)
Abstract:
Achieving software reliability goals is a major objective for software development organizations, as reliability is a critical constraint on their projects. Predicting software reliability from the data already available is an important challenge for software projects. The conventional approach to software reliability prediction is based on analytic models. These models do not account for the inherent variability and uncertainty of software processes, and they require parameter estimation and rest on unrealistic or oversimplified assumptions – apparent deficiencies. This chapter presents applications of Simulation and Neural Networks in software reliability prediction from the practitioner’s perspective. Different simulation and neural network models are used to predict the reliability of a real software system using published data. The predictive capability of the models is evaluated against the actual data. A comparison of simulation and neural network models versus analytic models is provided. Simulation and Neural Network models offer a superior alternative to conventional analytic models.
Keywords: Risk Analysis, Neural Networks, Monte Carlo Simulation, Software Reliability Prediction, Stochastic model.
The Results
The published results of the Galileo software system reliability prediction, given above, are used to evaluate the results of the Simulation and Neural Network models in this chapter. Table 7 shows the evaluation and comparison of the models. The comparison shows that the Simulation and Neural Network models are superior both to the analytic models and to the simulation model using the special-purpose simulator.
Table 7: Models’ Evaluation and Comparison
| Prediction Model | Total Defects Predicted | Total Defects Actual | Model Absolute Error | Model Percentage Error |
| JM | 381 | 351 | 30 | 8.55% |
| MO | 367 | 351 | 16 | 4.56% |
| LV | 614 | 351 | 263 | 74.93% |
| Simulator | 341 | 351 | -10 | -2.85% |
| MLFN LPRM | 347 | 351 | -4 | -1.14% |
| MLFN RGM | 345 | 351 | -6 | -1.71% |
| Simulation LPRM | 348 | 351 | -3 | -0.86% |
| Simulation RGM | 345 | 351 | -6 | -1.71% |
The Simulation and Neural Network models based on the Logarithmic Poisson Reliability Model (LPRM) perform better than the respective models based on the Reliability Growth Model (RGM). Also, the Simulation LPRM is slightly better than the MLFN LPRM. Thus, the best-predicting model for Galileo’s CDS reliability is the Simulation LPRM.
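The ranking in Table 7 can be reproduced mechanically by sorting the models on absolute percentage error. A small sketch using the tabulated totals:

```python
def rank_models(actual, predictions):
    """Rank prediction models by absolute percentage error, best first."""
    error = {name: abs(pred - actual) / actual * 100
             for name, pred in predictions.items()}
    return sorted(error, key=error.get)

# Predicted totals from Table 7; the actual total is 351 defects
models = {"JM": 381, "MO": 367, "LV": 614, "Simulator": 341,
          "MLFN LPRM": 347, "MLFN RGM": 345,
          "Simulation LPRM": 348, "Simulation RGM": 345}
print(rank_models(351, models)[0])   # Simulation LPRM
```

The best-ranked model (smallest absolute percentage error, 0.86%) is the Simulation LPRM, and the worst is LV (74.93%), matching the conclusion above.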