The problem can be solved exactly using the concept of conditional probability density function [see ()--()]. We get
\[
\mathrm{E}(\mu) = x_1 - \frac{\sigma_Z^2}{\sigma_Z^2+\sigma_2^2}\,(x_2-\mu_R)\,,
\qquad
\sigma^2(\mu) = \sigma_1^2 + \left(\frac{1}{\sigma_Z^2}+\frac{1}{\sigma_2^2}\right)^{-1}.
\]

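To make the formulas concrete, here is a minimal numerical sketch. All values are invented for illustration, and the variable names (`x1`, `sigma1`, `sigma_Z`, `z_new`, `sigma2`) are assumptions of this sketch: a reading $x_1 \pm \sigma_1$, an offset previously calibrated as $0 \pm \sigma_Z$, and a new determination of the offset, $(x_2-\mu_R) \pm \sigma_2$:

```python
import math

# Illustrative values (not from the text)
x1, sigma1 = 10.0, 0.5           # measured value of mu and its sampling std. dev.
z_old, sigma_Z = 0.0, 0.4        # first calibration of the "zero"
z_new, sigma2 = 0.6, 0.3         # new calibration: measured offset x2 - mu_R

# Weighted average of the two measured offsets (inverse-variance weights)
w_old, w_new = 1 / sigma_Z**2, 1 / sigma2**2
z_bar = (w_old * z_old + w_new * z_new) / (w_old + w_new)
var_z = 1 / (w_old + w_new)      # variance of the combined offset estimate

mu_best = x1 - z_bar                       # corrected best value of mu
sigma_mu = math.sqrt(sigma1**2 + var_z)    # combined uncertainty

print(f"correction = {z_bar:.4f}  (naive guess would be {z_new})")
print(f"mu = {mu_best:.4f} +- {sigma_mu:.4f}")
```

Note that the applied correction (0.384) is smaller than the naive one (0.6), because the prior calibration $0 \pm \sigma_Z$ still carries weight.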
The best value of $\mu$ is shifted by an amount $\sigma_Z^2\,(x_2-\mu_R)/(\sigma_Z^2+\sigma_2^2)$, with respect to the measured value $x_1$, which is not exactly the measured offset $x_2-\mu_R$, as was naïvely guessed, and the uncertainty depends on $\sigma_1$, $\sigma_2$ and $\sigma_Z$. It is easy to be convinced that the exact result is more reasonable than the (suggested) first guess. Let us rewrite $\mathrm{E}(\mu)$ in two different ways:
\begin{align*}
\mathrm{E}(\mu) &= x_1 - \frac{\sigma_Z^2}{\sigma_Z^2+\sigma_2^2}\,(x_2-\mu_R)\,,\\
\mathrm{E}(\mu) &= x_1 - \frac{0/\sigma_Z^2 + (x_2-\mu_R)/\sigma_2^2}{1/\sigma_Z^2 + 1/\sigma_2^2}\,.
\end{align*}
- The first expression shows that one has to apply the correction only if $x_2 \ne \mu_R$. If instead $\sigma_Z = 0$ there is no correction to be applied, since the instrument is perfectly calibrated. If $\sigma_Z = \sigma_2$ the correction is half of the measured difference between $x_2$ and $\mu_R$.
- The second expression shows explicitly what is going on and why the result is consistent with the way we have modelled the uncertainties. In fact we have performed two independent calibrations of the offset: the initial one ($0 \pm \sigma_Z$) and the new one ($x_2-\mu_R$, with uncertainty $\sigma_2$). The best estimate of the true value of the ``zero'' is the weighted average of the two measured offsets.
- The new uncertainty of $\mu$ [see ()] is a combination of $\sigma_1$ and the uncertainty of the weighted average of the two offsets. Its value is smaller than it would be with only one calibration and, obviously, larger than that due to the sampling fluctuations alone:
\[
\sigma_1 \le \sigma(\mu) = \sqrt{\sigma_1^2 + \frac{\sigma_Z^2\,\sigma_2^2}{\sigma_Z^2+\sigma_2^2}} \le \sqrt{\sigma_1^2 + \sigma_Z^2}\,. \tag{5.86}
\]
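As a consistency check, the result can be verified by simulation. The following sketch uses invented parameter values: it draws the offset from its calibration distribution, simulates the measurement of $\mu$ and the new offset calibration, applies the weighted-average correction, and confirms that the corrected estimate scatters around the true value with the predicted standard deviation:

```python
import math
import random

random.seed(42)
mu_true = 10.0                                  # true value (for the simulation only)
sigma1, sigma_Z, sigma2 = 0.5, 0.4, 0.3         # illustrative uncertainties
w_Z, w_2 = 1 / sigma_Z**2, 1 / sigma2**2
sigma_mu = math.sqrt(sigma1**2 + 1 / (w_Z + w_2))  # predicted combined uncertainty

errors = []
for _ in range(200_000):
    z = random.gauss(0.0, sigma_Z)              # true offset of this experiment
    x1 = random.gauss(mu_true + z, sigma1)      # reading when measuring mu
    z_new = random.gauss(z, sigma2)             # new offset calibration (x2 - mu_R)
    z_bar = (w_Z * 0.0 + w_2 * z_new) / (w_Z + w_2)  # weighted average of offsets
    errors.append(x1 - z_bar - mu_true)         # error of the corrected estimate

mean_err = sum(errors) / len(errors)
rms = math.sqrt(sum(e * e for e in errors) / len(errors))
print(f"mean error = {mean_err:+.4f} (expect ~0)")
print(f"rms error  = {rms:.4f} (predicted {sigma_mu:.4f})")
```

The observed scatter should reproduce $\sqrt{\sigma_1^2 + (1/\sigma_Z^2 + 1/\sigma_2^2)^{-1}}$ within Monte Carlo fluctuations, lying between $\sigma_1$ and $\sqrt{\sigma_1^2+\sigma_Z^2}$ as stated above.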