

Poisson model

The Poisson distribution gives the probability of observing $n$ counts in a fixed time interval, when the expectation of the number of counts to be observed is $\lambda $:
\begin{displaymath}
p(n\,\vert\,\lambda) = \frac{\lambda^n\, e^{-\lambda}}{n!}\,.
\end{displaymath} (43)
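As a minimal numerical illustration (not part of the original text), the probabilities of Eq. (43) can be evaluated with a short Python sketch; the value $\lambda=4.2$ is an arbitrary choice made only for the example.
\begin{verbatim}
from scipy.stats import poisson

lam = 4.2                      # hypothetical expectation of the number of counts
for n in range(8):
    # p(n | lambda) = lambda^n exp(-lambda) / n!
    print(n, poisson.pmf(n, lam))
\end{verbatim}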

The inverse problem is to infer $\lambda $ from the $n$ observed counts. (Note that what physically matters is the rate $r=\lambda/\Delta T$, where $\Delta T$ is the observation time.) Applying Bayes' theorem and using a uniform prior $p(\lambda\,\vert\,I)$ for $\lambda $, we get
\begin{displaymath}
p(\lambda\,\vert\,n,I) = \frac{\displaystyle\frac{\lambda^n\, e^{-\lambda}}{n!}}
{\displaystyle\int_0^\infty \frac{\lambda^n\, e^{-\lambda}}{n!}\,\mbox{d}\lambda}
= \frac{\lambda^n\, e^{-\lambda}}{n!} \,.
\end{displaymath} (44)
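As a numerical check (not part of the original derivation), the normalization in Eq. (44) can be carried out on a grid of $\lambda $ values; the Python sketch below assumes an arbitrary illustrative value $n=5$ and compares the result with the Gamma density $\lambda^n e^{-\lambda}/n!$.
\begin{verbatim}
import numpy as np
from scipy.stats import gamma

n = 5                                    # hypothetical number of observed counts
lam = np.linspace(0.0, 40.0, 4001)       # grid of lambda values
unnorm = lam**n * np.exp(-lam)           # likelihood times uniform prior
dlam = lam[1] - lam[0]
post = unnorm / (unnorm.sum() * dlam)    # numerical normalization of Eq. (44)

# The posterior coincides with lambda^n exp(-lambda)/n!, i.e. a Gamma(n+1, 1) density
print(np.max(np.abs(post - gamma.pdf(lam, n + 1))))   # ~0, up to grid error
\end{verbatim}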

As in the Gaussian case, the posterior has the same mathematical expression as the likelihood, but with the roles of variable and parameter interchanged. Expectation and variance of $\lambda $ are both equal to $n+1$, while the most probable value is $\lambda_m =n$. For large $n$, the extra `$+1$' (due to the asymmetry of the prior with respect to $\lambda=0$) can be ignored, so that $\mbox{E}(\lambda)=\sigma^2(\lambda)\approx n$ and, once again, the uncertainty about $\lambda $ follows a Gaussian model. The relative uncertainty on $\lambda $ decreases as $1/\sqrt{n}$.
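These summaries can be verified with a short sketch, since the posterior $\lambda^n e^{-\lambda}/n!$ is a Gamma density with shape parameter $n+1$ and unit scale; the value $n=100$ below is an arbitrary illustrative choice, and the last line compares a posterior quantile with its Gaussian approximation.
\begin{verbatim}
import numpy as np
from scipy.stats import gamma, norm

n = 100                              # hypothetical (large) number of observed counts
post = gamma(n + 1)                  # posterior of lambda: Gamma(n+1, 1)
print(post.mean(), post.var())       # expectation and variance: both n + 1
# most probable value: the mode of Gamma(n+1, 1) is lambda_m = n
# Gaussian approximation for large n: mean ~ n, sigma ~ sqrt(n)
print(post.ppf(0.975), norm(n, np.sqrt(n)).ppf(0.975))
# similar quantiles; the small difference reflects the ignored '+1' and residual skewness
\end{verbatim}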

When the observed value of $n$ is zero, Eq. (44) yields $p(\lambda\,\vert\,n=0)=e^{-\lambda}$, which has its maximum of belief at zero but an exponential tail toward large values of $\lambda $. Expected value and standard deviation of $\lambda $ are both equal to 1. The 95% probabilistic upper bound on $\lambda $ is $\lambda_{95\% UB}=3$, as can easily be calculated by solving the equation $\int_0^{\lambda_{95\% UB}}\!\!p(\lambda\,\vert\,n=0)\,\mbox{d}\lambda=0.95$, which gives $\lambda_{95\% UB}=-\ln 0.05\approx 3$. Note that this result, too, depends on the choice of prior, though Astone and D'Agostini (1999) have shown that the upper bound is insensitive to the exact form of the prior, provided the prior somehow models what they call the ``positive attitude of rational scientists'' (the prior must not be in contradiction with what one could actually observe, given the detector sensitivity). In particular, they show that a uniform prior is a good practical choice to model this attitude. On the other hand, talking about `objective' probabilistic upper/lower limits makes no sense, as discussed in detail and with examples in the cited paper: one can at most speak of conventionally defined non-probabilistic sensitivity bounds, which separate the measurement region from that in which experimental sensitivity is lost (Astone and D'Agostini 1999, D'Agostini 2000, Astone et al 2002).
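For completeness, the upper bound quoted above can be reproduced numerically; the sketch below simply inverts the cumulative distribution of the unit exponential, recovering $\lambda_{95\% UB}=-\ln 0.05\approx 3.0$.
\begin{verbatim}
import numpy as np
from scipy.stats import expon

# posterior for n = 0 is exp(-lambda), i.e. a unit exponential
lam_95 = expon.ppf(0.95)          # solves 1 - exp(-lambda_95) = 0.95
print(lam_95, -np.log(0.05))      # both ~ 3.0
\end{verbatim}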

