

Measurements close to the edge of the physical region

A case which has essentially no solution in the maximum likelihood approach is when a measurement is performed at the edge of the physical region and the measured value comes out very close to it, or even inside the unphysical region. Let us take a numerical example.
Problem:
An experiment is planned to measure the (electron) neutrino mass. The simulations show that the mass resolution is $ 3.3\,$eV$ /c^2$, largely independent of the mass value, and that the measured mass is normally distributed around the true mass. The mass value which results from the analysis procedure, corrected for all known systematic effects, is $ x=-5.41\,$eV$ /c^2$. What have we learned about the neutrino mass?
Solution:
Our a priori belief is that the mass is positive and not too large (otherwise it would already have been measured in other experiments). One can take any vague distribution which assigns a non-vanishing probability density to values between 0 and 20 or 30 eV$ /c^2$. In fact, if an experiment with a resolution of $ \sigma=3.3\,$eV$ /c^2$ has been planned and financed by rational people, with the hope of finding evidence of a non-negligible mass, it means that the mass was thought to be in that range. If there is no reason to prefer one value over another in that interval, a uniform distribution can be used, for example

$\displaystyle f_{\circ K}(m)=k=1/30\hspace{1.0cm} (0\le m \le 30)\,.$ (5.21)

Otherwise, if one thinks there is a greater chance of the mass having small rather than large values, a prior which reflects such an assumption could be chosen, for example a half normal with $ \sigma_\circ=10\,\rm {eV}$

$\displaystyle f_{\circ N}(m) =\frac{2}{\sqrt{2\,\pi}\,\sigma_\circ} \,\exp{\left[-\frac{m^2}{2\,\sigma_\circ^2}\right]} \hspace{1.0cm} (m \ge 0)\,,$ (5.22)

or a triangular distribution

$\displaystyle f_{\circ T}(m) = \frac{1}{450}\,(30-m) \hspace{.6cm} (0\le m \le 30)\,.$ (5.23)

Let us consider for simplicity the case of the uniform prior. Bayes' theorem then gives

$\displaystyle f(m\,\vert\,x, f_{\circ K}) = \frac{ \frac{1}{\sqrt{2\,\pi}\,\sigma}\,\exp{\left[-\frac{(m-x)^2}{2\,\sigma^2}\right]}\,k }{ \int_0^{30} \frac{1}{\sqrt{2\,\pi}\,\sigma}\,\exp{\left[-\frac{(m-x)^2}{2\,\sigma^2}\right]}\,k\,{\rm d}m }$ (5.24)

$\displaystyle \phantom{f(m\,\vert\,x, f_{\circ K})} = \frac{19.8}{\sqrt{2\,\pi}\,\sigma}\,\exp{\left[-\frac{(m-x)^2}{2\,\sigma^2}\right]} \hspace{0.7 cm}(0 \le m \le 30)\,.$ (5.25)
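
As a quick cross-check of the constant 19.8 appearing in Eq. (5.25), the following minimal Python sketch (an illustration, not part of the original text; it assumes only the standard library) computes one over the fraction of the Gaussian centred at $ x$ that falls inside the physical interval $ [0,30]$:

```python
from math import erf, sqrt

# Cross-check of the constant 19.8 in Eq. (5.25): with the uniform prior the
# factor k cancels, and the normalization is one over the fraction of the
# Gaussian N(x, sigma^2) that falls inside the physical interval [0, 30].
x, sigma = -5.41, 3.3          # measured value and resolution (eV/c^2)

def Phi(z):
    """Standard normal cumulative distribution."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

norm = 1.0 / (Phi((30.0 - x) / sigma) - Phi((0.0 - x) / sigma))
print(round(norm, 1))          # -> 19.8
```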

The value with the highest degree of belief is $ m=0$, but $ f(m)$ is non-vanishing up to $ 30\,$eV$ /c^2$ (even if it is very small there). We can define an interval, starting from $ m=0$, in which we believe $ m$ to lie with a certain probability. For example, this level of probability can be $ 95\, \%$. One then has to find the value $ m_\circ$ for which the cumulative function $ F(m_\circ)$ equals 0.95. This value of $ m$ is called the upper limit (or upper bound). The result is

$\displaystyle m < 3.9\,$   eV$\displaystyle /c^2$   at $\displaystyle 95\,\%\ $   probability$\displaystyle \,.$ (5.26)
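
Explicitly, integrating Eq. (5.25) from 0 to $ m_\circ$, the condition $ F(m_\circ)=0.95$ reads

$\displaystyle \frac{\Phi\!\left(\frac{m_\circ-x}{\sigma}\right)-\Phi\!\left(\frac{-x}{\sigma}\right)} {\Phi\!\left(\frac{30-x}{\sigma}\right)-\Phi\!\left(\frac{-x}{\sigma}\right)} = 0.95\,,$

where $ \Phi$ denotes the standard normal cumulative distribution; solving this numerically with $ x=-5.41\,$eV$ /c^2$ and $ \sigma=3.3\,$eV$ /c^2$ gives the limit quoted above.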

If we had assumed either of the other two initial distributions, the limit would have been in both cases

$\displaystyle m < 3.7\,$   eV$\displaystyle /c^2$   at $\displaystyle 95\,\%\ $   probability$\displaystyle \,,$ (5.27)

practically the same (especially if compared with the experimental resolution of $ 3.3\,$   eV$ /c^2$).
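
These numbers are easy to reproduce numerically. The sketch below (Python with NumPy; the grid-based integration and the variable names are illustrative choices, not taken from the text) tabulates the unnormalized posterior for each of the three priors and reads off the $ 95\,\%$ point of the cumulative:

```python
import numpy as np

# Upper limits at 95% probability for the three priors of Eqs. (5.21)-(5.23):
# tabulate the unnormalized posterior on a grid over the physical region,
# normalize its cumulative, and read off where the cumulative reaches 0.95.
x, sigma = -5.41, 3.3                     # measured value and resolution (eV/c^2)
m = np.linspace(0.0, 30.0, 300_001)
likelihood = np.exp(-(m - x)**2 / (2.0 * sigma**2))

priors = {
    "uniform":     np.full_like(m, 1.0 / 30.0),        # Eq. (5.21)
    "half normal": np.exp(-m**2 / (2.0 * 10.0**2)),     # Eq. (5.22), sigma_0 = 10
    "triangular":  (30.0 - m) / 450.0,                  # Eq. (5.23)
}

for name, prior in priors.items():
    cdf = np.cumsum(likelihood * prior)
    cdf /= cdf[-1]
    m95 = np.interp(0.95, cdf, m)
    print(f"{name:12s}  m < {m95:.2f} eV/c^2 at 95% probability")
# The limits come out of the order of the 3.9 and 3.7 eV/c^2 quoted in
# Eqs. (5.26)-(5.27), i.e. comparable to the experimental resolution.
```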
Comment:
Let us assume an a priori function sharply peaked at zero and see what happens. For example, it could be of the kind

$\displaystyle f_{\circ S}(m)\propto \frac{1}{m}\,.$ (5.28)

To avoid singularities in the integral, let us take a power of $ m$ slightly greater than $ -1$, for example $ -0.99$, and let us limit its domain to 30, getting

$\displaystyle f_{\circ S}(m) = \frac{0.01}{30^{0.01}\,m^{0.99}}\,.$ (5.29)

The upper limit becomes

$\displaystyle m < 0.006\,$   eV$\displaystyle /c^2$   at $\displaystyle \ 95\,\%\ \rm {probability}\,.$ (5.30)

Any experienced physicist would find this result ridiculous. The upper limit is less than $ 0.2\,\%$ of the experimental resolution; rather like expecting to resolve objects having dimensions smaller than a micron with a design ruler! Note instead that in the previous examples the limit was always of the order of magnitude of the experimental resolution $ \sigma$. As $ f_{\circ S}(m)$ becomes more and more peaked at zero (power of $ m\rightarrow -1$) the limit gets smaller and smaller. This means that, asymptotically, the degree of belief that $ m=0$ is so high that, whatever you measure, you will conclude that $ m=0$: you could use the measurement to calibrate the apparatus! In other words, this choice of initial distribution was unreasonable.
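
This pathological behaviour can also be checked numerically. In the sketch below (again an illustrative Python fragment) the substitution $ u=m^{0.01}$ is used to tame the integrable singularity of the prior at $ m=0$; it reproduces the tiny limit of Eq. (5.30), and an exponent even closer to $ -1$ shrinks the limit further:

```python
import numpy as np

# Posterior for the sharply peaked prior f_oS(m) ~ m^(-0.99) on (0, 30]:
#   f(m | x) ~ m^(-0.99) * exp(-(m - x)^2 / (2 sigma^2)).
# The substitution u = m^0.01 (so m = u^100 and m^(-0.99) dm = 100 du) turns
# the integrable singularity at m = 0 into a smooth integrand in u.
x, sigma = -5.41, 3.3
alpha = 0.01                                 # prior ~ m^(alpha - 1); alpha -> 0 is "more peaked"

u = np.linspace(1e-9, 30.0**alpha, 200_001)
m = u**(1.0 / alpha)
w = np.exp(-(m - x)**2 / (2.0 * sigma**2))   # likelihood on the grid (constant factors cancel)

cdf = np.cumsum(w)
cdf /= cdf[-1]
m95 = np.interp(0.95, cdf, u)**(1.0 / alpha)
print(f"m < {m95:.3f} eV/c^2 at 95% probability")   # ~0.006, as in Eq. (5.30)
# Repeating with alpha = 0.001 (prior ~ m^(-0.999)) gives a far smaller limit
# still, illustrating the asymptotic behaviour discussed above.
```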

Instead, priors motivated by the positive attitude of the researchers are much more robust: even when the observation is ``very negative'' the result is stable, and one always gets a limit of the order of the experimental resolution. Anyhow, it is also clear that when $ x$ is several $ \sigma$ below zero one starts to suspect that ``something is wrong with the experiment'', which formally corresponds to doubts about the likelihood itself.

