
Introduction to the ``primer''

The purpose of a measurement is to determine the value of a physical quantity. One often speaks of the true value, an idealized concept achieved by an infinitely precise and accurate measurement, i.e. immune from errors. In practice the result of a measurement is expressed in terms of the best estimate of the true value and of a related uncertainty. Traditionally the various contributions to the overall uncertainty are classified in terms of ``statistical'' and ``systematic'' uncertainties: expressions which reflect the sources of the experimental errors (the quotation marks indicate that a different way of classifying uncertainties will be adopted here).

``Statistical'' uncertainties arise from variations in the results of repeated observations under (apparently) identical conditions. They vanish if the number of observations becomes very large (``the uncertainty is dominated by systematics'' is the typical expression used in this case) and can be treated -- in most cases, but with some exceptions of great relevance in High Energy Physics -- using conventional statistics based on the frequency-based definition of probability.
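As a reminder of the standard result behind this behaviour: for $n$ independent observations with standard deviation $\sigma$, the uncertainty of their arithmetic mean decreases as $1/\sqrt{n}$,
$$
\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i\,, \qquad
\sigma(\bar{x}) = \frac{\sigma}{\sqrt{n}}\,,
$$
so that for very large samples this contribution becomes negligible with respect to the ``systematic'' one.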

On the other hand, it is not possible to treat ``systematic'' uncertainties coherently in the frequentistic framework. Several ad hoc prescriptions on how to combine ``statistical'' and ``systematic'' uncertainties can be found in textbooks and in the literature: ``add them linearly''; ``add them linearly if $\ldots$, else add them quadratically''; ``don't add them at all''; and so on (see, e.g., Part 3 of Ref. [1]). The ``fashion'' at the moment is to add them quadratically if they are considered independent, or to build a covariance matrix of ``statistical'' and ``systematic'' uncertainties to treat general cases. These procedures are not justified by conventional statistical theory, but they are accepted because of the pragmatic good sense of physicists. For example, an experimentalist may be reluctant to add twenty or more contributions linearly when evaluating the uncertainty of a complicated measurement, or may decide to treat the correlated ``systematic'' uncertainties ``statistically'', in both cases unaware of, or simply not caring about, violating frequentistic principles.
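As a sketch of these prescriptions (in Python, with purely illustrative numbers and variable names), the following contrasts the linear and the quadratic combination of independent contributions, and shows how a contribution common to several results enters a covariance matrix.

    import numpy as np

    # Illustrative numbers only: hypothetical contributions to one measurement.
    stat = 0.5                        # "statistical" uncertainty
    syst = np.array([0.3, 0.2, 0.4])  # independent "systematic" contributions

    # Linear sum: the most conservative prescription.
    linear = stat + syst.sum()

    # Quadratic sum ("in quadrature"): the prescription usually adopted
    # when the contributions are considered independent.
    quadratic = np.sqrt(stat**2 + np.sum(syst**2))
    print(f"linear: {linear:.2f}   quadratic: {quadratic:.2f}")

    # For several results sharing a common "systematic" contribution, a
    # covariance matrix can be built: diagonal terms carry the full variance,
    # off-diagonal terms the shared (fully correlated) part.
    stat_i = np.array([0.5, 0.7])     # independent uncertainties of two results
    common = 0.3                      # contribution fully correlated between them
    cov = np.diag(stat_i**2) + common**2 * np.ones((2, 2))
    print(cov)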

The only way to deal with these and related problems in a consistent way is to abandon the frequentistic interpretation of probability introduced at the beginning of the last century, and to recover the intuitive concept of probability as degree of belief. Stated differently, one needs to associate the idea of probability with the lack of knowledge, rather than with the outcome of repeated experiments. This has also been recognized by the International Organization for Standardization (ISO), which assumes the subjective definition of probability in its ``Guide to the expression of uncertainty in measurement'' [3].

This primer is organized as follows:

