Setting up the problem

As we have seen in Sec. [*], the inference of the `unobserved' variables, given the `observed' ones, for the problem represented graphically by the `Bayesian' network of Fig. [*], consists in evaluating `somehow'
$\displaystyle \hspace{1cm}$   $\displaystyle f(p,n_I,n_{NI},n_{P_I},n_{P_{NI}},\pi_1,\pi_2\,\vert\,n_P,n_s,r_1,s_1,r_2,s_2)\,,$ (88)

from which the most interesting probability distribution, at least for the purpose of this paper,
$\displaystyle \hspace{1cm}$   $\displaystyle f(p\,\vert\,n_P,n_s,r_1,s_1,r_2,s_2)$  

can be obtained by marginalization (see also Appendix A). Apart from a normalization factor, Eq. ([*]) is proportional to Eq. ([*]), hereafter indicated by `$f(\ldots)$' for compactness, which can be written, by the chain rule, following the bottom-up analysis of the graphical model of Fig. [*]:
$\displaystyle f(\ldots)$ $\displaystyle =$ $\displaystyle f(n_P\,\vert\,n_{P_I},n_{P_{NI}}) \cdot
f(n_{P_I}\,\vert\,\pi_1,n_I) \cdot
f(n_{P_{NI}}\,\vert\,\pi_2,n_{NI}) \cdot
f(\pi_1\,\vert\,r_1,s_1)\cdot$  
    $\displaystyle f(\pi_2\,\vert\,r_2,s_2) \cdot f(n_{NI}\,\vert\,n_s,n_I) \cdot f(n_I\,\vert\,p,n_s)
\cdot f_0(p)$  

in which
$\displaystyle f(n_P\,\vert\,n_{P_I},n_{P_{NI}})$ $\displaystyle =$ $\displaystyle \delta_{n_P,\,n_{P_I}+n_{P_{NI}}}$ (89)
$\displaystyle f(n_{P_I}\,\vert\,\pi_1,n_I)$ $\displaystyle =$ $\displaystyle \binom{n_I} {n_{P_I}} \cdot
\pi_1^{n_{P_I}}\cdot (1-\pi_1)^{n_I-n_{P_I}}$ (90)
$\displaystyle f(n_{P_{NI}}\,\vert\,\pi_2,n_{NI})$ $\displaystyle =$ $\displaystyle \binom{n_{NI}} {n_{P_{NI}}}\cdot
\pi_2^{n_{P_{NI}} } \cdot (1-\pi_2)^{n_{NI}-n_{P_{NI}}}$ (91)


$\displaystyle f(\pi_1\,\vert\,r_1,s_1)$ $\displaystyle =$ $\displaystyle \frac{\pi_1^{r_1-1}\cdot(1-\pi_1)^{s_1-1}}{\beta(r_1,s_1)}$ (92)
$\displaystyle f(\pi_2\,\vert\,r_2,s_2)$ $\displaystyle =$ $\displaystyle \frac{\pi_2^{r_2-1}\cdot(1-\pi_2)^{s_2-1}}{\beta(r_2,s_2)}$ (93)
$\displaystyle f(n_{NI}\,\vert\,n_s,n_I)$ $\displaystyle =$ $\displaystyle \delta_{n_{NI},\,n_s-n_{I}}$ (94)
$\displaystyle f(n_I\,\vert\,p,n_s)$ $\displaystyle =$ $\displaystyle \binom{n_s}{n_I}\cdot p^{n_I}\cdot(1-p)^{n_s-n_I} \,,$ (95)

where $\delta_{m,k}$ is the Kronecker delta (all other symbols belong to the definitions of the binomial and Beta distributions), while the prior distribution $f_0(p)$ remains to be defined. The distribution of interest is then obtained by summing and integrating over the unobserved variables:
$\displaystyle f(p\,\vert\,n_P,n_s,r_1,s_1,r_2,s_2)$ $\displaystyle \propto$ $\displaystyle \sum_{n_I}\sum_{n_{NI}}\sum_{n_{P_I}}\sum_{n_{P_{NI}}}\int\!\!\int\! f(\ldots)\,$d$\displaystyle \pi_1$   d$\displaystyle \pi_2\,,$  

where the limits of sums and integration will be written in detail in the sequel.
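Before any analytic simplification, this marginalization can be approximated by brute force: sample the network top-down, following the same factorization, and keep only the draws reproducing the observed $n_P$. A minimal Python sketch follows; the function name and the numerical values are illustrative, not taken from the paper, and a flat prior $f_0(p)$ is assumed:

```python
# Ancestral sampling of the Bayesian network with rejection on the
# observed n_P: the draws of p surviving the rejection step are
# (approximate) samples from f(p | n_P, n_s, r1, s1, r2, s2).
# All numerical values here are illustrative.
import random

def sample_p_by_rejection(n_P_obs, n_s, r1, s1, r2, s2, n_draws=50_000):
    accepted = []
    for _ in range(n_draws):
        p = random.random()                      # flat prior f0(p)
        pi1 = random.betavariate(r1, s1)         # f(pi1 | r1, s1)
        pi2 = random.betavariate(r2, s2)         # f(pi2 | r2, s2)
        n_I = sum(random.random() < p for _ in range(n_s))            # Binom(n_s, p)
        n_PI = sum(random.random() < pi1 for _ in range(n_I))         # Binom(n_I, pi1)
        n_PNI = sum(random.random() < pi2 for _ in range(n_s - n_I))  # Binom(n_NI, pi2)
        if n_PI + n_PNI == n_P_obs:              # Kronecker-delta constraint, Eq. (89)
            accepted.append(p)
    return accepted

# e.g. n_s = 20 sampled individuals, n_P = 5 positives, Beta(9,1) for the
# sensitivity pi1 and Beta(1,9) for the false-positive probability pi2
samples = sample_p_by_rejection(5, 20, 9, 1, 1, 9, n_draws=20_000)
```

This is only a sanity-check device: its accuracy degrades quickly as $n_s$ grows, which is why the closed-form reduction below matters.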

As a first step we simplify the equation by summing over $n_{P_{NI}}$ and $n_{NI}$, exploiting the Kronecker delta terms ([*]) and ([*]): replacing $n_{P_{NI}}$ with $n_P - n_{P_{I}}$ and $n_{NI}$ with $n_s - n_I$, Eq. ([*]) becomes

$\displaystyle f(n_P\!-\!n_{P_I}\vert n_s\!-\!n_{I},\pi_2)\!$ $\displaystyle =$ $\displaystyle {\binom{n_s\!-\!n_I} {n_P\!-\!n_{P_I}}}
\cdot {\pi_2}^{(n_P-n_{P_I})} \cdot (1-\pi_2)^{(n_s-n_I)-(n_P-n_{P_I})}\ \ \ \ $ (96)

with the obvious constraints $n_s-n_I \ge n_P-n_{P_{I}}$ (i.e. $n_{NI} \ge n_{P_{NI}}$) and $n_{P_I} \le n_I$.

Apart from constant factors, and denoting by `$I$' the whole status of information on which the inference is based, i.e. $I \equiv \{ n_P,n_s,r_1,s_1,r_2,s_2\}$, the inferential distribution of interest $f(p\,\vert\,n_P,n_s,r_1,s_1,r_2,s_2)$ then becomes

$\displaystyle f(p\,\vert\,I)$ $\displaystyle \propto$ $\displaystyle f_0(p) \cdot \sum_{n_{P_I}=0}^{n_P}\sum_{n_I=0}^{n_s}
\int_{0}^{1}\!\!\int_{0}^{1}\! f({n_{P_I}}\,\vert\, n_I,\pi_1)
\cdot f(n_P-n_{P_I}\,\vert\, n_s-n_I,\pi_2) \cdot$  
    $\displaystyle f(\pi_1\,\vert\,r_1,s_1) \cdot f(\pi_2\,\vert\,r_2,s_2)
\cdot f(n_I\,\vert\,p,n_s)\,$   d$\displaystyle \pi_1\,$   d$\displaystyle \pi_2$  
  $\displaystyle \propto$ $\displaystyle f_0(p) \cdot \sum_{n_{P_I}=0}^{n_P}\sum_{n_I=0}^{n_s}\,
\int_{0}^{1}\!\!\int_{0}^{1}\! {\binom{n_I} {n_{P_I}}} \cdot {\pi_1}^{n_{P_I}}
\cdot (1-\pi_1)^{n_I-n_{P_I}} \cdot$  
    $\displaystyle {\binom{n_s-n_I} {n_P-n_{P_I}}} \cdot {\pi_2}^{(n_P-n_{P_I})} \cdot
(1-\pi_2)^{(n_s-n_I)-(n_P-n_{P_I})} \cdot$  
    $\displaystyle \pi_1^{r_1-1} \cdot (1-\pi_1)^{s_1-1} \cdot
\pi_2^{r_2-1} \cdot (1-\pi_2)^{s_2-1} \cdot
\binom{n_s}{n_I} \cdot p^{n_I} \cdot (1-p)^{n_s-n_I}\,$   d$\displaystyle \pi_1$   d$\displaystyle \pi_2$  
  $\displaystyle \propto$ $\displaystyle f_0(p) \cdot \sum_{n_{P_I}=0}^{n_P}\sum_{n_I=0}^{n_s}
\binom{n_I}{n_{P_I}} \cdot \binom{n_s-n_I}{n_P-n_{P_I}} \cdot
\binom{n_s}{n_I} \! \cdot \! p^{n_I} \! \cdot \! (1-p)^{n_s-n_I}\cdot\!$  
    $\displaystyle \int_{0}^{1}\! {\pi_1}^{n_{P_I}+r_1-1}\cdot (1-\pi_1)^{n_I-n_{P_I}+s_1-1}\,$d$\displaystyle \pi_1$ (97)
    $\displaystyle \int_{0}^{1}\! {\pi_2}^{(n_P-n_{P_I}+r_2-1)} \cdot
(1-\pi_2)^{(n_s-n_I)-(n_P-n_{P_I})+s_2-1}\,$d$\displaystyle \pi_2$  

where we have dropped all the factors not depending on the variables being summed or integrated over.

The two integrals appearing in Eq. ([*]) are, in terms of a generic variable $x$, of the form $\int_0^1x^{\alpha-1}\cdot (1-x)^{\beta-1}\,$d$x$, which defines the special function Beta, B$(\alpha,\beta)$, whose value can be expressed in terms of the Gamma function as B$(\alpha,\beta) = \Gamma(\alpha)\cdot \Gamma(\beta)/\Gamma(\alpha+\beta)$. We then get
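This identity is easy to check numerically; the following Python fragment (purely illustrative, with arbitrarily chosen arguments) compares the Gamma-based expression with a midpoint-rule estimate of the defining integral:

```python
# Check B(a, b) = Gamma(a) * Gamma(b) / Gamma(a + b) against a direct
# midpoint-rule estimate of int_0^1 x^(a-1) (1-x)^(b-1) dx.
import math

def beta_via_gamma(a, b):
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def beta_via_integral(a, b, n=100_000):
    h = 1.0 / n  # midpoint rule on (0, 1), avoiding the endpoints
    return h * sum(((k + 0.5) * h) ** (a - 1) * (1.0 - (k + 0.5) * h) ** (b - 1)
                   for k in range(n))

# For integer arguments B(a, b) = (a-1)! (b-1)! / (a+b-1)!,
# e.g. B(3, 5) = 2! * 4! / 7! = 1/105
```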

$\displaystyle f(p\,\vert\,I)$ $\displaystyle \propto$ $\displaystyle f_0(p) \cdot \sum_{n_{P_I}=0}^{n_P}\sum_{n_I=0}^{n_s}
\left[\binom{n_I}{n_{P_I}} \cdot \binom{n_s-n_I}{n_P-n_{P_I}} \cdot
\binom{n_s}{n_I} \cdot p^{n_I} \cdot (1-p)^{n_s-n_I} \cdot \right.$  
    $\displaystyle \left.\frac{\Gamma (n_{P_I}+r_1)\cdot \Gamma(n_I-n_{P_I}+s_1)}
{\Gamma(n_I + r_1+s_1)} \cdot\right.$ (98)
    $\displaystyle \left.\frac{\Gamma (n_P-n_{P_I}+r_2)\cdot \Gamma(n_s-n_I-n_P+n_{P_I}+s_2)}
{ \Gamma(n_s-n_I+r_2+s_2)} \right] \,.$
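Eq. ([*]) can be evaluated numerically as it stands. The Python sketch below assumes a flat prior $f_0(p)=1$ and illustrative values of $n_P$, $n_s$, $r_1$, $s_1$, $r_2$, $s_2$ (not taken from the paper); log-Gamma is used for numerical stability, and terms with vanishing binomials (the constraints discussed above) are skipped:

```python
# Unnormalized posterior f(p | I) of the final formula, evaluated on a grid
# of p values and normalized numerically.  Flat prior f0(p) = 1.
import math

def posterior_unnorm(p, n_P, n_s, r1, s1, r2, s2):
    total = 0.0
    for n_I in range(n_s + 1):
        for n_PI in range(n_P + 1):
            c1 = math.comb(n_I, n_PI)              # 0 unless n_PI <= n_I
            c2 = math.comb(n_s - n_I, n_P - n_PI)  # 0 unless n_P-n_PI <= n_s-n_I
            if c1 == 0 or c2 == 0:
                continue                           # constraint violated
            log_b1 = (math.lgamma(n_PI + r1) + math.lgamma(n_I - n_PI + s1)
                      - math.lgamma(n_I + r1 + s1))
            log_b2 = (math.lgamma(n_P - n_PI + r2)
                      + math.lgamma(n_s - n_I - n_P + n_PI + s2)
                      - math.lgamma(n_s - n_I + r2 + s2))
            total += (c1 * c2 * math.comb(n_s, n_I)
                      * p ** n_I * (1.0 - p) ** (n_s - n_I)
                      * math.exp(log_b1 + log_b2))
    return total

# grid evaluation and normalization (n_P = 5, n_s = 20, Beta(9,1) and
# Beta(1,9) priors for pi1 and pi2 -- illustrative values)
h = 1 / 200
ps = [k * h for k in range(1, 200)]
vals = [posterior_unnorm(p, 5, 20, 9, 1, 1, 9) for p in ps]
Z = h * sum(vals)                    # simple Riemann normalization
post = [v / Z for v in vals]
```

The double sum has at most $(n_P+1)(n_s+1)$ terms, so the cost is negligible compared with the rejection sampler sketched earlier, and it is exact up to the discretization of $p$.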