The Unnecessity of Assuming Statistically Independent Tests in Bayesian Software Reliability Assessments

07/31/2022
by Kizito Salako, et al.

When assessing a software-based system, the results of statistical inference on operational testing data can provide strong support for software reliability claims. For inference, these data (i.e., software successes and failures) are often assumed to arise in an independent, identically distributed (i.i.d.) manner. In this paper we show how conservative Bayesian approaches make this assumption unnecessary, by incorporating one's doubts about the assumption into the assessment. We derive conservative confidence bounds on a system's probability of failure on demand (pfd) when operational testing reveals no failures. The generality and utility of these bounds are demonstrated in the assessment of a nuclear power-plant safety-protection system, under varying levels of skepticism about the i.i.d. assumption.
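
To make the setting concrete, here is a minimal numerical sketch of conservative Bayesian inference under the standard i.i.d. assumption that the paper relaxes. It assumes the assessor's only prior knowledge is a constraint of the form P(pfd ≤ p_e) ≥ θ, and searches simple two-point priors for the one yielding the smallest posterior confidence in a claim "pfd ≤ p" after n failure-free demands. The constraint, the parameter values, and the function name are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch (not the paper's model) of conservative Bayesian inference
# under the i.i.d. assumption that the paper relaxes. Assumed prior knowledge:
# only the constraint P(pfd <= p_e) >= theta. We search two-point priors -- a
# common worst-case family in conservative Bayesian inference -- for the
# smallest posterior confidence in "pfd <= p" given n failure-free demands.
import numpy as np

def worst_case_confidence(p, p_e, theta, n, grid=100_000):
    """Worst-case posterior P(pfd <= p | n failure-free i.i.d. demands),
    over priors with mass theta at p_e and mass 1 - theta at some x > p."""
    assert p_e <= p, "the claimed bound p should be no tighter than p_e"
    xs = np.linspace(p, 1.0, grid, endpoint=False)[1:]  # candidate x in (p, 1)
    like_pe = (1.0 - p_e) ** n   # Bernoulli likelihood of n successes at p_e
    like_x = (1.0 - xs) ** n     # likelihood of n successes at each x
    posterior = theta * like_pe / (theta * like_pe + (1.0 - theta) * like_x)
    return posterior.min()

# Illustrative numbers only: 50% prior confidence that pfd <= 1e-4, and
# 4603 failure-free demands (enough for ~99% frequentist confidence at 1e-3).
print(worst_case_confidence(p=1e-3, p_e=1e-4, theta=0.5, n=4603))
```

Under these assumed inputs the sketch gives roughly 0.98 worst-case posterior confidence that pfd ≤ 10⁻³. The paper's contribution is to derive analogous conservative bounds when the i.i.d. assumption itself is in doubt.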
