next up previous contents
Next: Approximate methods Up: Evaluation of uncertainty: general Previous: Indirect measurements   Contents


Systematic errors

Uncertainty due to systematic effects is also included in a natural way in this approach. Let us first define the notation ($ i$ is the generic index): $ \underline{x}$ stands for the observed data, $ \underline{\mu}$ for the true values in which we are interested, and $ \underline{h}$ for the influence quantities. By influence quantities we mean:
$ \rightarrow$ all kinds of external factors which may influence the result (temperature, atmospheric pressure, etc.);
$ \rightarrow$ all calibration constants;
$ \rightarrow$ all possible hypotheses upon which the results may depend (e.g. Monte Carlo parameters).
From a probabilistic point of view, there is no distinction between $ \underline{\mu}$ and $ \underline{h}$: they are all conditional hypotheses for the $ \underline{x}$, i.e. causes which produce the observed effects. The difference is simply that we are interested in $ \underline{\mu}$ rather than in $ \underline{h}$.2.17

There are alternative ways to take into account the systematic effects in the final distribution of $ \underline{\mu}$:

  1. Global inference on $ f(\underline{\mu},\underline{h})$. We can use Bayes' theorem to make an inference on $ \underline{\mu}$ and $ \underline{h}$, as described in Section [*]:

    $\displaystyle \underline{x} \Rightarrow f(\underline{\mu},\underline{h}\,\vert\,\underline{x}) \Rightarrow f(\underline{\mu}\,\vert\,\underline{x})\,.$

    This method, depending on the joint prior distribution $ f_\circ(\underline{\mu},\underline{h})$, can even model possible correlations between $ \underline{\mu}$ and $ \underline{h}$ (e.g. radiative correction depending on the quantity of interest).
  2. Conditional inference (see Fig. [*]).
    Figure: Model to handle the uncertainty due to systematic errors by the use of conditional probability.
    Given the observed data, one has a joint distribution of $ \underline{\mu}$ for all possible configurations of $ \underline{h}$:

    $\displaystyle \underline{x} \Rightarrow f(\underline{\mu}\,\vert\,\underline{x},\underline{h})\,.$

    Each conditional result is reweighted with the distribution of beliefs about $ \underline{h}$, using the well-known law of probability:

    $\displaystyle f(\underline{\mu}\,\vert\,\underline{x}) = \int f(\underline{\mu}\,\vert\,\underline{x},\underline{h})\, f(\underline{h})\,\mathrm{d}\underline{h}\,.$ (2.6)

  3. Propagation of uncertainties. Essentially, one applies the propagation of uncertainty, whose most general case has been illustrated in the previous section, making use of the following model: one considers a raw result $ \underline{\mu}_R$, obtained for some nominal values $ \underline{h}_\circ$ of the influence quantities, i.e.

    $\displaystyle f(\underline{\mu}_R\,\vert\,\underline{x},\underline{h}_\circ)\,;$

    then the (corrected) true values are obtained as functions of the raw ones and of the possible values of the influence quantities, i.e.

    $\displaystyle \mu_i = \mu_i(\mu_{i_R}, \underline{h})\,.$
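The second method, Eq. (2.6), can be approximated numerically by Monte Carlo: sample $ \underline{h}$ from $ f(\underline{h})$ and, for each sample, draw $ \underline{\mu}$ from the conditional $ f(\underline{\mu}\,\vert\,\underline{x},\underline{h})$. A minimal sketch in Python for a toy model (a single observation with Gaussian likelihood and a Gaussian additive systematic offset; all numbers are invented for illustration):

```python
import numpy as np

# Sketch of Eq. (2.6): marginalize the conditional posterior
# f(mu | x, h) over the beliefs f(h) by Monte Carlo sampling.
# Toy assumptions: one observation x with Gaussian likelihood of
# width sigma, and a systematic offset h with beliefs N(0, sigma_h),
# so that f(mu | x, h) = N(x - h, sigma).

rng = np.random.default_rng(0)

x, sigma = 10.0, 0.5     # observed value and statistical width
sigma_h = 0.3            # width of the beliefs on the offset h

n = 200_000
h = rng.normal(0.0, sigma_h, n)    # sample h from f(h)
mu = rng.normal(x - h, sigma)      # sample mu from f(mu | x, h)

# In this linear Gaussian case the marginal f(mu | x) is wider than
# the conditional: its standard deviation is sqrt(sigma^2 + sigma_h^2).
print(mu.mean(), mu.std())
```

In this toy case the sample standard deviation reproduces $ \sqrt{\sigma^2 + \sigma_h^2}$; the same sampling scheme applies unchanged to any conditional model $ f(\underline{\mu}\,\vert\,\underline{x},\underline{h})$.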

The three ways lead to the same result; each of them can be more or less intuitive to different people, and more or less suitable for different applications. For example, the last two, which are formally equivalent, are the most intuitive for HEP experimentalists: they are conceptually equivalent to what experimentalists do when they vary -- within reasonable intervals -- all Monte Carlo parameters in order to estimate the systematic errors.2.18 The third form is particularly convenient for making linear expansions which lead to approximate solutions (see Section [*]).
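The linear expansion of the third method can be sketched as follows: writing $ \mu \approx \mu_R + \sum_i c_i\,(h_i - h_{\circ i})$, with sensitivities $ c_i = \partial\mu/\partial h_i$ evaluated at the nominal values, independent influence quantities contribute to the uncertainty in quadrature. A hedged numerical sketch (all sensitivities and widths are invented for illustration):

```python
import numpy as np

# Linear propagation of systematic uncertainties:
# mu ~ mu_R + sum_i c_i * (h_i - h_i0), with c_i = d mu / d h_i
# evaluated at the nominal values h_i0.  Illustrative numbers only.

mu_R, sigma_R = 10.0, 0.5          # raw result and its statistical width
c = np.array([0.8, -1.2])          # sensitivities d mu / d h_i
sigma_h = np.array([0.2, 0.1])     # uncertainties on the h_i

# Independent influence quantities: contributions add in quadrature.
sigma_syst = np.sqrt(np.sum((c * sigma_h) ** 2))
sigma_tot = np.hypot(sigma_R, sigma_syst)
print(sigma_syst, sigma_tot)
```

The same quadrature combination is what the approximate formulas of the next section formalize; correlations among the $ h_i$ would add cross terms $ 2\,c_i c_j\,\mathrm{Cov}(h_i,h_j)$ to the variance.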

There is an important remark to be made. In some cases it is preferable not to `integrate'2.19 over all $ h$'s. Instead, it is better to report the result as $ f(\underline{\mu}\,\vert\,\{h\})$, where $ \{h\}$ stands for a subset of $ \underline{h}$, taken at their nominal values, if:

If results are presented under the condition of $ \{h\}$, one should also report the derivatives of the result with respect to $ \{h\}$, so that one does not have to redo the complete analysis when the influence quantities are better known. A typical example in which this is usually done is the possible variation of a result due to the precise value of the charm-quark mass. A recent example in which this idea has been applied thoroughly is given in Ref. [26].
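The practical value of reporting such derivatives can be sketched with a hypothetical example: a result published conditional on a nominal value $ h_\circ$ can be shifted linearly when the influence quantity is later measured better, with no need to redo the analysis (all numbers below are invented):

```python
# Updating a conditional result with a published derivative.
# Hypothetical numbers: a result mu obtained at a nominal value h0
# of an influence quantity, together with the derivative d mu / d h.

mu_at_h0 = 3.50    # published result, conditional on h = h0
h0 = 1.25          # nominal value used in the analysis
dmu_dh = 0.40      # published derivative d mu / d h at h0

h_new = 1.30       # improved knowledge of the influence quantity
mu_updated = mu_at_h0 + dmu_dh * (h_new - h0)
print(mu_updated)
```

The update is valid as long as the linear expansion holds over the shift $ h_{\mathrm{new}} - h_\circ$; the uncertainty on $ h$ can likewise be propagated through the same derivative.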


Giulio D'Agostini 2003-05-15