3.4.2 The Error In Estimating Probabilities
Measurable Outcome 3.8, Measurable Outcome 3.11, Measurable Outcome 3.12
Often, Monte Carlo simulations are used to estimate the probability of an event occurring. For instance, in the turbine blade example, we might be interested in the probability that the hot metal temperature exceeds a critical value. Generically, suppose that the event of interest is \(A\). Then an estimate of \(P\{A\}\) is the fraction of times the event \(A\) occurs out of the total number of trials,

\[
\hat{p}(A) = \frac{N_A}{N},
\]
where \(N_ A\) is the number of times \(A\) occurred in the Monte Carlo simulation of sample size \(N\).
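To make the estimator concrete, here is a minimal sketch of a Monte Carlo probability estimate in Python. The input distribution, the function `model`, and the value of `y_limit` are hypothetical placeholders (they are not the turbine blade model from the text); the point is simply that \(\hat{p}(A)\) is computed by counting how often \(y > y_{limit}\) among the \(N\) trials.

```python
import numpy as np

# Sketch: estimate P{A} = P(y > y_limit) by Monte Carlo.
# The input distribution and model below are hypothetical placeholders.

rng = np.random.default_rng(0)

def model(x):
    """Hypothetical model mapping a random input x to the output y."""
    return 2.0 * x + 1.0

y_limit = 4.0          # assumed critical value
N = 10_000             # number of Monte Carlo trials

x = rng.normal(loc=1.0, scale=0.5, size=N)   # assumed input distribution
y = model(x)

N_A = np.count_nonzero(y > y_limit)          # number of trials where A occurred
p_hat = N_A / N                              # estimate of P{A}

print(f"p_hat(A) = {p_hat:.4f}   (N_A = {N_A} of N = {N})")
```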
\(\hat{p}(A)\) is an unbiased estimate of \(P\{A\}\). To see this, define a function \(I(A_i)\) which equals 1 if event \(A\) occurred on the \(i\)-th trial, and equals zero if \(A\) did not occur. For example, if the event \(A\) is defined as \(y > y_{limit}\), then \(I(A_i)\) would be defined as,

\[
I(A_i) = \begin{cases} 1, & y_i > y_{limit}, \\ 0, & y_i \leq y_{limit}, \end{cases}
\]

where \(y_i\) is the output observed on the \(i\)-th trial. Using this definition, the number of times that \(A\) occurred can be written as

\[
N_A = \sum_{i=1}^{N} I(A_i).
\]

Finding the expectation of \(N_A\) gives,

\[
E[N_A] = E\left[ \sum_{i=1}^{N} I(A_i) \right] = \sum_{i=1}^{N} E[I(A_i)].
\]

Since the Monte Carlo trials are drawn at random and independently from the same distribution, \(E[I(A_i)] = P\{A\}\) for every trial. Thus,

\[
E[N_A] = N\, P\{A\}.
\]

Finally, using this result we see that

\[
E[\hat{p}(A)] = E\left[ \frac{N_A}{N} \right] = \frac{E[N_A]}{N} = P\{A\},
\]

which confirms that \(\hat{p}(A)\) is an unbiased estimator of \(P\{A\}\).
We can also use the central limit theorem to show that \(\hat{p}(A)\) is approximately normally distributed for large \(N\), with mean \(P\{A\}\) and standard error

\[
\sigma_{\hat{p}} = \sqrt{\frac{P\{A\}\,(1-P\{A\})}{N}},
\]

which follows because each \(I(A_i)\) is a Bernoulli random variable with variance \(P\{A\}(1-P\{A\})\).
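As an illustrative check of these results, the following sketch repeats the Monte Carlo estimate many times for a case in which \(P\{A\}\) is known in closed form (a standard normal output with an assumed threshold \(y_{limit} = 1.5\)). The average of the repeated estimates should be close to \(P\{A\}\) (unbiasedness), and their scatter should be close to \(\sqrt{P\{A\}(1-P\{A\})/N}\).

```python
import math
import numpy as np

# Sketch: empirical check of the mean and standard error of p_hat(A).
# Assume y ~ N(0, 1) and y_limit = 1.5, so P{A} is known exactly.

rng = np.random.default_rng(1)

y_limit = 1.5
p_true = 0.5 * math.erfc(y_limit / math.sqrt(2.0))  # exact P(y > y_limit)

N = 1_000          # samples per Monte Carlo estimate
M = 5_000          # number of repeated, independent estimates

y = rng.standard_normal((M, N))
p_hat = np.count_nonzero(y > y_limit, axis=1) / N   # M estimates of P{A}

se_theory = math.sqrt(p_true * (1.0 - p_true) / N)

print(f"exact P(A)              = {p_true:.4f}")
print(f"mean of p_hat over runs = {p_hat.mean():.4f}")      # close to P(A)
print(f"std  of p_hat over runs = {p_hat.std(ddof=1):.5f}")
print(f"sqrt(P(A)(1-P(A))/N)    = {se_theory:.5f}")
```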
In the exposition above, we calculated the standard error in estimating the probability of an event \(A\), defined as \(A = \{y : y > y_{limit}\}\), using Monte Carlo sampling as \(\sqrt{\frac{P\{A\}(1-P\{A\})}{N}}\). Now suppose we estimate the probability of the event \(B\), defined as \(B = \{y : y > \frac{y_{limit}}{2}\}\), using the Monte Carlo method with the same number of samples. The standard error in estimating this probability will be given by

\[
\sigma_{\hat{p}(B)} = \sqrt{\frac{P\{B\}\,(1-P\{B\})}{N}}.
\]

The derivation above carries over unchanged for \(P\{B\}\); the only difference is that \(P\{A\}\) is replaced by \(P\{B\}\).
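The following sketch makes this comparison concrete by estimating \(P\{A\}\) and \(P\{B\}\) from the same set of samples and evaluating the corresponding standard errors, with \(P\{A\}\) and \(P\{B\}\) replaced by their Monte Carlo estimates. The standard normal output distribution and the value of \(y_{limit}\) are again assumptions made only for illustration.

```python
import numpy as np

# Sketch: standard errors for A = {y > y_limit} and B = {y > y_limit/2},
# both estimated from the same N Monte Carlo samples.
# The output distribution and y_limit are assumed for illustration.

rng = np.random.default_rng(2)

y_limit = 1.5
N = 10_000

y = rng.standard_normal(N)                    # assumed output samples

p_A = np.count_nonzero(y > y_limit) / N
p_B = np.count_nonzero(y > y_limit / 2) / N

# Standard errors with the true probabilities replaced by their estimates
se_A = np.sqrt(p_A * (1.0 - p_A) / N)
se_B = np.sqrt(p_B * (1.0 - p_B) / N)

print(f"p_hat(A) = {p_A:.4f},  standard error ~ {se_A:.5f}")
print(f"p_hat(B) = {p_B:.4f},  standard error ~ {se_B:.5f}")
```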