Some approximate rules

Having seen the utility of reshaping the posterior obtained from a flat prior once a different prior is assumed, we now try to find some practical rules based on the means and the standard deviations of the distributions involved.
  1. The first rule is based on a Gaussian approximation, and it holds if both the prior and the posterior obtained by JAGS assuming a uniform prior appear somehow `bell-shaped', although we cannot expect them to be perfectly symmetric, especially if small or large values of $p$ are preferred. In this case the following (very rough) approximation is obtained for the mean and the standard deviation$^{55}$
    $$\mu_p = \frac{\mu_{\cal L}/\sigma^2_{\cal L} + \mu_0/\sigma_0^2}{1/\sigma^2_{\cal L} + 1/\sigma_0^2}\qquad\mbox{(86)}$$
    $$\frac{1}{\sigma^2_p} = \frac{1}{\sigma^2_{\cal L}} + \frac{1}{\sigma_0^2}\,,\qquad\mbox{(87)}$$

    where $\mu_{\cal L}$ and $\sigma_{\cal L}$ are the mean and the standard deviation obtained from JAGS with a flat prior; $\mu_0$ and $\sigma_0$ are those summarizing the prior; $\mu_p$ and $\sigma_p$ should be (approximately) equal to the JAGS results obtained using the prior summarized by $\mu_0$ and $\sigma_0$. Applying this rule to the case of Fig. [*], for which $\mu_{\cal L}=0.0987$, $\sigma_{\cal L}=0.0229$, $\mu_0=0.30$ and $\sigma_0=0.10$, we get $p=0.1087\pm 0.0223$, which, rounding the uncertainty to one significant digit, becomes `$0.11\pm 0.02$', equal to the result obtained above by reshaping or by re-running JAGS with the new prior.
  2. The second rule makes use of the Beta distribution and its role as conjugate prior when inferring $p$ of a binomial, as we have seen in Sec. [*]. The idea is to see the pdf estimated by JAGS with a flat prior as a `rough Beta' whose parameters can be estimated from the mean and the standard deviation using Eqs. ([*])-([*]). We can then imagine that the pdf of $p$ had been estimated from a `virtual' binomial process whose outcomes update the parameters of the Beta according to Eqs. ([*])-([*]). The trick then consists in modifying the Beta parameters according to the simple rules:
    $$r_p = r_{\cal L} + r_0 - 1$$
    $$s_p = s_{\cal L} + s_0 - 1\,,$$

    where $r_{\cal L}$ and $s_{\cal L}$ are evaluated from $\mu_{\cal L}$ and $\sigma_{\cal L}$ making use of Eqs. ([*]) and ([*]). Then the new mean and standard deviation are evaluated from $r_p$ and $s_p$ (see Sec. [*]).

    For example, in the case of Fig. [*] we have (with an exaggerated number of digits) $p=0.0987\pm 0.0229$, which could derive from a Beta having $r_{\cal L} = 16.7$ and $s_{\cal L} = 152.4$. If we have a prior somehow peaked around 0.3, e.g. $p_0=0.3\pm 0.1$, it can be parameterized by a Beta with $r_0=6$ and $s_0=14$. Applying the above rule we get

    $$r_p = 16.7 + 6 - 1 = 21.7$$
    $$s_p = 152.4 + 14 - 1 = 165.4\,,$$

    which then yield $p = 0.116 \pm 0.023$, very similar to what was obtained by reshaping or re-running JAGS ($0.12\pm 0.02$ at two decimal digits).
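As a quick numerical check of the first rule (a sketch in Python, not part of the original analysis, which uses JAGS), Eqs. (86)-(87) amount to a precision-weighted average of the flat-prior result and the prior:

```python
# Gaussian-approximation rule: combine the JAGS flat-prior summary
# (mu_L, sigma_L) with a prior summarized by (mu_0, sigma_0).
mu_L, sigma_L = 0.0987, 0.0229   # mean/std from JAGS with a flat prior
mu_0, sigma_0 = 0.30, 0.10       # mean/std summarizing the prior

w_L, w_0 = 1 / sigma_L**2, 1 / sigma_0**2        # precisions (Eq. 87)
sigma_p = (w_L + w_0) ** -0.5
mu_p = (mu_L * w_L + mu_0 * w_0) / (w_L + w_0)   # weighted mean (Eq. 86)

print(f"p = {mu_p:.4f} +/- {sigma_p:.4f}")       # p = 0.1087 +/- 0.0223
```

Rounding the uncertainty to one significant digit reproduces the $0.11\pm 0.02$ quoted above.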
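The second rule can be sketched in the same spirit (again a hypothetical illustration; the helper names are ours, and the moment inversion uses the standard Beta relations $\mu = r/(r+s)$ and $\sigma^2 = \mu(1-\mu)/(r+s+1)$):

```python
from math import sqrt

def beta_params(mu, sigma):
    """Invert the Beta moments mu = r/(r+s), sigma^2 = mu(1-mu)/(r+s+1)."""
    n = mu * (1 - mu) / sigma**2 - 1   # n = r + s
    return mu * n, (1 - mu) * n

def beta_moments(r, s):
    """Mean and standard deviation of a Beta(r, s)."""
    mu = r / (r + s)
    return mu, sqrt(mu * (1 - mu) / (r + s + 1))

r_L, s_L = beta_params(0.0987, 0.0229)   # 'rough Beta' of the flat-prior result
r_0, s_0 = beta_params(0.30, 0.10)       # prior 0.3 +/- 0.1  ->  Beta(6, 14)
r_p, s_p = r_L + r_0 - 1, s_L + s_0 - 1  # the update rule
mu_p, sigma_p = beta_moments(r_p, s_p)
print(f"p = {mu_p:.3f} +/- {sigma_p:.3f}")
```

Small differences in the last digits with respect to the numbers in the text are due to rounding $r_{\cal L}$ and $s_{\cal L}$ before the update.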
As we see, these approximated rules are rather rough, but they have the advantage of being fast to apply, if one wants to arrive quickly to some reasonable conclusions, based on her personal priors.56