n.I   <- rbinom(nr, ns, p)      # 1.
n.NI  <- ns - n.I
pi1   <- rbeta(nr, r1, s1)      # 2.
pi2   <- rbeta(nr, r2, s2)
nP.I  <- rbinom(nr, n.I, pi1)   # 3.
nP.NI <- rbinom(nr, n.NI, pi2)
nP    <- nP.I + nP.NI           # 4.

We just need to define the parameters of interest, including nr, the number of extractions, run the code and add other instructions in order to print and plot the results (a minimalist script performing all these tasks is provided in Appendix B.5). The only instructions worth some further comment are the two related to step nr. 3. A curious feature of the `statistical functions' of R is that they can accept vectors for the number of trials and for the probability of success at each trial, as is done here. This means that internally the random generator is called nr times, the first time with n.I[1] and pi1[1], the second time with n.I[2] and pi1[2], and so on, thus sparing us the use of explicit loops. Note that, if precise values of π1 and π2 were assumed, then we would just need to replace the two lines of step nr. 2 with the assignment of their numeric values.
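The vectorized behaviour of the calls in step nr. 3 can be checked in isolation with a small sketch (the numbers below are arbitrary, chosen only for illustration, and are not those of the paper):

```r
# Toy illustration (hypothetical numbers): rbinom() accepts vectors
# for the number of trials and for the success probability, drawing
# each variate with its own pair of parameters.
set.seed(1)                              # for reproducibility
nr   <- 5                                # number of extractions
n.I  <- c(10, 20, 30, 40, 50)            # numbers of infected individuals
pi1  <- c(0.90, 0.92, 0.94, 0.96, 0.98)  # sensitivities
nP.I <- rbinom(nr, n.I, pi1)             # internally: nr separate draws
# the equivalent explicit loop, which the vectorized call spares us:
nP.I.loop <- integer(nr)
for (i in seq_len(nr)) nP.I.loop[i] <- rbinom(1, n.I[i], pi1[i])
print(nP.I)
```

Each element of nP.I is drawn from its own binomial, Binom(n.I[i], pi1[i]), exactly as if the generator had been called nr times inside a loop.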
Since, as we have already seen, the limiting effect is due to systematics, and in particular, in our case, to the uncertainty about π2, we show in Fig. how the result changes if we reduce σ(π2) to the level of σ(π1). As we can see (a result largely expected), there is quite a sizable improvement in the separability of the values of p for large values of ns. Again qualitatively, we can see that values of p which differ by can be resolved. We then move to the case in which sensitivity and specificity are equal, both in expected value and in standard uncertainty. The first thing we note in these new histograms is that for small values of p they are no longer symmetric and Gaussian-like. This is due to the fact that no negative values of nP are possible, and there is then a kind of `accumulation' at low values of nP, and therefore of p (this kind of skewness is typical of all probability distributions of variables defined to be positive and whose standard deviation is not much smaller than the expected value; think, e.g., of a Poissonian with small λ).
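The skewness just mentioned can be checked directly in R; the following sketch uses an arbitrary small value λ = 2 (so that the standard deviation, √λ, is comparable to the expected value λ):

```r
# Hypothetical illustration: a Poisson variable with small lambda is
# visibly skewed, because it is defined to be non-negative and its
# standard deviation sqrt(lambda) is not much smaller than its mean.
lambda <- 2                      # arbitrary choice: sd/mean = 1/sqrt(2)
x  <- 0:10
px <- dpois(x, lambda)           # Poisson probabilities P(X = x)
# the mass piles up near zero, with a long right tail:
print(round(px, 3))
# by contrast, a large lambda (e.g. 100) gives an almost symmetric,
# Gaussian-like shape around the mean
```

For λ = 2 the distribution has its mode shared between x = 1 and x = 2 and falls off slowly to the right, while for large λ the familiar near-Gaussian shape is recovered.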